Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/2441
Full metadata record
DC Field | Value | Language
dc.contributor.author | Jayal, A | -
dc.contributor.author | Shepperd, M J | -
dc.coverage.spatial | 11 | en
dc.date.accessioned | 2008-06-25T12:56:06Z | -
dc.date.available | 2008-06-25T12:56:06Z | -
dc.date.issued | 2008 | -
dc.identifier.citation | ACM Journal on Educational Resources in Computing (JERIC), 8(4). | -
dc.identifier.uri | http://bura.brunel.ac.uk/handle/2438/2441 | -
dc.description.abstract | In this short paper we explore a problematic aspect of the automated assessment of diagrams. Diagrams have partial and sometimes inconsistent semantics. Typically, much of the meaning of a diagram resides in its labels; however, the choice of labeling is largely unrestricted. This means a correct solution may use labels that differ from, yet are semantically equivalent to, those in the specimen solution. A human marker can easily overcome this problem; with e-assessment, however, it is challenging. We empirically explore the scale of the synonym problem by analysing 160 student solutions to a UML task. From this we find that the cumulative growth of synonyms shows only a limited tendency to diminish at the margin. This finding has significant implications for the ease with which we may develop future e-assessment systems for diagrams, in that the need for better algorithms for assessing the semantic similarity of labels becomes inescapable. | en
dc.format.extent | 249863 bytes | -
dc.format.mimetype | application/pdf | -
dc.language.iso | en | -
dc.subject | e-Learning | en
dc.subject | e-Assessment | en
dc.subject | empirical research | en
dc.subject | diagrams | en
dc.subject | UML | en
dc.subject | text processing | en
dc.title | The problem of labels in e-assessment of diagrams | en
dc.type | Research Paper | en
Appears in Collections: Computer Science
Dept of Computer Science Research Papers
Software Engineering (B-SERC)

Files in This Item:
File | Size | Format
JERIC_v20.pdf | 257.85 kB | Adobe PDF


Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.