Abstract

This paper presents research on the application of a latent semantic analysis (LSA) model to the automatic evaluation of short answers (25 to 70 words) to open-ended questions. To reach a viable application of this LSA model, the research goals were as follows: (1) to develop robustness, (2) to increase accuracy, and (3) to widen portability. The methods consisted of the following tasks: first, the implementation of word bigrams; second, the implementation of combined unigram and bigram models using multiple linear regression; and, finally, the addition of an adjustment step after score attribution that takes into account the average word count of the answers. The corpus was composed of 359 answers to two questions from a Brazilian public university's entrance examination, which had been previously scored by human evaluators. The results show that the experiments reached an accuracy of approximately 84.94%, while the accuracy of the two human evaluators was approximately 84.93%. In conclusion, these results indicate that automatic evaluation technology is reaching a high level of effectiveness.
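To make the pipeline described above concrete, the sketch below shows one plausible way to combine unigram and bigram LSA similarities with multiple linear regression and a length-based adjustment. It is not the authors' implementation: the toy data, the vectorizer settings, the number of LSA dimensions, and the specific form of the word-count adjustment are all illustrative assumptions.

```python
# Minimal sketch (not the paper's code): LSA-style scoring that combines
# unigram and bigram cosine similarities via multiple linear regression.
# Data, parameters, and the length adjustment below are illustrative assumptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LinearRegression
from sklearn.metrics.pairwise import cosine_similarity

def lsa_similarity(train_texts, answers, reference, ngram_range, n_components=2):
    """Build an LSA space from train_texts, project the answers and a
    reference answer into it, and return each answer's cosine similarity
    to the reference."""
    vec = TfidfVectorizer(ngram_range=ngram_range)
    X = vec.fit_transform(train_texts)
    svd = TruncatedSVD(n_components=min(n_components, X.shape[1] - 1))
    svd.fit(X)
    ans_vecs = svd.transform(vec.transform(answers))
    ref_vec = svd.transform(vec.transform([reference]))
    return cosine_similarity(ans_vecs, ref_vec).ravel()

# Toy student answers with hypothetical human scores, plus a reference answer.
answers = [
    "photosynthesis converts light energy into chemical energy in plants",
    "plants use sunlight to make food through photosynthesis",
    "the moon orbits the earth once a month",
]
human_scores = np.array([5.0, 4.0, 0.5])
reference = ("photosynthesis is the process by which plants convert "
             "light into chemical energy")
corpus = answers + [reference]

# Unigram and bigram similarities to the reference answer.
sim_uni = lsa_similarity(corpus, answers, reference, ngram_range=(1, 1))
sim_bi = lsa_similarity(corpus, answers, reference, ngram_range=(2, 2))

# Combine the two similarity features with multiple linear regression,
# fitted against the human scores.
features = np.column_stack([sim_uni, sim_bi])
model = LinearRegression().fit(features, human_scores)
predicted = model.predict(features)

# Post-hoc adjustment: shrink scores of answers whose length is far below
# the average answer length (a stand-in for the paper's word-count step).
lengths = np.array([len(a.split()) for a in answers])
adjusted = predicted * np.clip(lengths / lengths.mean(), 0.5, 1.0)
print(np.round(adjusted, 2))
```

In this sketch the regression learns how much weight to give each n-gram model, which mirrors the idea of combining unigrams and bigrams rather than choosing one; the final length adjustment only penalizes answers that are noticeably shorter than average, leaving on-length answers untouched.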
