Abstract
In this article we explore a problematic aspect of the automated assessment of diagrams. Diagrams have partial and sometimes inconsistent semantics, and much of their meaning typically resides in their labels; however, the choice of labeling is largely unrestricted. This means a correct solution may use labels that differ from, yet are semantically equivalent to, those of the specimen solution. A human marker can easily overcome this problem; for e-assessment it is far more challenging. We empirically explore the scale of the synonym problem by analyzing 160 student solutions to a UML task. We find that the cumulative growth of synonyms shows only a limited tendency to slow at the margin, even after applying a range of text-processing algorithms such as stemming and auto-correction of spelling errors. This finding has significant implications for the ease with which future e-assessment systems for diagrams can be developed: the need for better algorithms for assessing the semantic similarity of labels becomes inescapable.
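To make the normalization step concrete, the following is a minimal sketch (not the authors' implementation) of how diagram labels might be conflated before counting distinct synonyms: lower-casing, spell-correction by snapping to a reference vocabulary with Python's difflib, and suffix stemming with NLTK's Porter stemmer. The specimen vocabulary and student labels are purely illustrative.

```python
import difflib
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()

# Hypothetical reference vocabulary drawn from a specimen solution.
SPECIMEN_VOCAB = {"customer", "order", "payment", "invoice", "product"}

def normalise(label: str) -> str:
    """Normalise a single diagram label."""
    word = label.strip().lower()
    # Auto-correct likely spelling errors by snapping to the closest
    # vocabulary entry (difflib's ratio-based matching, cutoff 0.8).
    matches = difflib.get_close_matches(word, SPECIMEN_VOCAB, n=1, cutoff=0.8)
    if matches:
        word = matches[0]
    # Stem to conflate simple morphological variants (e.g. plurals).
    return stemmer.stem(word)

# Illustrative student labels: spelling variants collapse, but true
# synonyms ("client", "purchaser") survive normalisation, so the count
# of distinct labels keeps growing.
student_labels = ["Customer", "customers", "custmer", "Client", "Purchaser"]
seen = set()
for label in student_labels:
    seen.add(normalise(label))
    print(f"{label!r:12} -> distinct normalised labels so far: {len(seen)}")
```

Run on this toy input, the first three labels all normalise to a single form, while the two genuine synonyms remain distinct, mirroring the abstract's observation that stemming and spell-correction alone do not stop synonym growth.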