Abstract

The assessment of answers is an important process that demands great effort from evaluators: it requires sustained concentration, free of fluctuations in mood. This motivates the need to automate answer script evaluation. For text answer evaluation, sentence similarity measures have been widely used to compare student-written answers with reference texts. In this paper, we propose an automated answer evaluation system that uses our proposed cosine-based sentence similarity measures to evaluate answers. Cosine measures have proved effective for comparing free-text student answers with reference texts. We propose a set of novel cosine-based sentence similarity measures with varied approaches to constructing the document vector space. In addition, we propose a novel synset-based word similarity measure for computing document vectors, coupled with varied dimensionality-reduction approaches for reducing the vector space dimensions. In total, we propose 21 cosine-based sentence similarity measures and evaluate their performance on the MSR paraphrase corpus and Li’s benchmark datasets. We also apply these measures in the automatic answer evaluation system and compare their performance on the Kaggle short answer and essay dataset. The system-generated scores are compared with human scores using Pearson correlation, and the results show that the two are correlated.
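To make the core idea concrete, the following is a minimal sketch of cosine-based sentence comparison and Pearson correlation against human scores. It is not the paper's actual measures (which use synset-based word similarity and dimensionality reduction over the document vector space); the function names and example data below are assumptions introduced purely for illustration.

```python
# Illustrative sketch only: simple term-frequency vectors over the union
# vocabulary of two sentences, compared with cosine similarity, plus a
# Pearson correlation between hypothetical system and human scores.
import math
from collections import Counter


def cosine_similarity(sent_a: str, sent_b: str) -> float:
    """Cosine similarity of bag-of-words vectors for two sentences."""
    vec_a, vec_b = Counter(sent_a.lower().split()), Counter(sent_b.lower().split())
    vocab = set(vec_a) | set(vec_b)  # shared vector space dimensions
    dot = sum(vec_a[w] * vec_b[w] for w in vocab)
    norm_a = math.sqrt(sum(v * v for v in vec_a.values()))
    norm_b = math.sqrt(sum(v * v for v in vec_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def pearson(xs, ys):
    """Pearson correlation between two equally long score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


if __name__ == "__main__":
    student = "photosynthesis converts light energy into chemical energy"
    reference = "plants convert light energy to chemical energy during photosynthesis"
    print(round(cosine_similarity(student, reference), 3))

    # Hypothetical example values, not results from the paper.
    system_scores = [0.82, 0.41, 0.67, 0.90]
    human_scores = [4.0, 2.0, 3.5, 4.5]
    print(round(pearson(system_scores, human_scores), 3))
```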
