Abstract

This paper presents an approach to long-answer evaluation using lexical and semantic similarity measures. The goal of this work is to introduce a system that programmatically evaluates examinees' long answers, reducing the time and effort of human intervention and making the evaluation procedure impartial to all users. In this work, the user answers (examinee answers) are first matched against standard answers (examiner answers) using a lexical similarity measure. In the testing phase, five sets of question-answers were considered, where each set contains a single question from a subject domain and five different answers to it. The system achieved acceptable accuracy with respect to human judgment. In the next phase of the work, the two answers were compared using a semantic similarity measure: synonyms of the keywords from both answers were retrieved from the semantic dictionary WordNet to increase the relevant overlap between the answers. Applying this semantic similarity measuring technique to the same question-answer sets increased the evaluation accuracy, as validated by an expert.
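The two-stage idea described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the tiny `SYNONYMS` dictionary is a hypothetical stand-in for WordNet synonym sets, and the tokenization and scoring functions are assumed for the sake of the example.

```python
# Sketch of lexical vs. semantic answer overlap. SYNONYMS stands in
# for WordNet; the real system would query the WordNet lexical database.
SYNONYMS = {
    "fast": {"quick", "rapid"},
    "car": {"automobile", "vehicle"},
}

def tokenize(text):
    """Naive whitespace tokenizer (a real system would also stem
    and remove stop words)."""
    return set(text.lower().split())

def lexical_overlap(examinee, examiner):
    """Fraction of the examiner (standard) answer's keywords that
    appear verbatim in the examinee's answer."""
    a, b = tokenize(examinee), tokenize(examiner)
    return len(a & b) / len(b) if b else 0.0

def semantic_overlap(examinee, examiner):
    """Like lexical_overlap, but an examiner keyword also counts as
    matched when one of its synonyms appears in the examinee answer."""
    a, b = tokenize(examinee), tokenize(examiner)
    matched = {t for t in b if t in a or SYNONYMS.get(t, set()) & a}
    return len(matched) / len(b) if b else 0.0

# "rapid automobile" shares only the word "a" with "a fast car"
# lexically, but matches fully once synonyms are considered.
print(lexical_overlap("a rapid automobile", "a fast car"))   # low
print(semantic_overlap("a rapid automobile", "a fast car"))  # 1.0
```

The example shows why synonym expansion raises the overlap score: a lexically different but semantically equivalent answer is no longer penalized.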
