Abstract

Automatic assessment of exams is widely preferred by educators over multiple-choice exams because it measures student performance effectively, removes the subjectivity of evaluating student responses, and is far faster than time-consuming manual evaluation. In this study, a new approach to Automatic Short Answer Grading (ASAG) is proposed using MaLSTM and the sense vectors obtained with SemSpace, a synset-based sense embedding method built on WordNet. Synset representations of the student answers and the reference answers are fed into a parallel LSTM architecture, transformed into sentence representations in the hidden layer, and compared in the output layer, where the vectorial similarity of the two representation vectors is computed with Manhattan similarity. The proposed approach has been tested on the Mohler ASAG dataset and achieves successful results in terms of Pearson correlation (r) and RMSE. It has also been tested, as a case study, on a specific dataset (CU-NLP) created from the exam of the “Natural Language Processing” course in the Computer Engineering Department of Cukurova University, where it again achieves a successful correlation. The results obtained in the experiments show that the proposed system can be used efficiently and effectively in context-dependent ASAG tasks.
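The architecture described in the abstract can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the authors' implementation: it assumes the SemSpace sense vectors have already been looked up for each disambiguated token (random tensors stand in for them here), uses a single shared-weight LSTM encoder for both the student and the reference answer, and scores their similarity as exp(-||h_s - h_r||_1), the standard Manhattan (MaLSTM) similarity. The vector dimension (300) and hidden size (50) are illustrative choices.

```python
import torch
import torch.nn as nn


class MaLSTMSimilarity(nn.Module):
    """Siamese LSTM scored with Manhattan (MaLSTM) similarity.

    Both answers are encoded by the *same* LSTM (shared weights); the
    similarity exp(-||h_student - h_reference||_1) lies in (0, 1] and can
    be rescaled to the grading range of the dataset.
    """

    def __init__(self, sense_dim: int = 300, hidden_dim: int = 50):
        super().__init__()
        self.encoder = nn.LSTM(sense_dim, hidden_dim, batch_first=True)

    def encode(self, sense_vectors: torch.Tensor) -> torch.Tensor:
        # sense_vectors: (batch, seq_len, sense_dim) -- one sense vector
        # per disambiguated token of the answer.
        _, (h_n, _) = self.encoder(sense_vectors)
        return h_n[-1]                      # (batch, hidden_dim)

    def forward(self, student: torch.Tensor, reference: torch.Tensor) -> torch.Tensor:
        h_s, h_r = self.encode(student), self.encode(reference)
        l1 = torch.sum(torch.abs(h_s - h_r), dim=1)
        return torch.exp(-l1)               # similarity in (0, 1]


if __name__ == "__main__":
    model = MaLSTMSimilarity()
    # Random stand-ins for the sense vectors of a 12-token student answer
    # and a 9-token reference answer, batch size 1.
    student_ans = torch.randn(1, 12, 300)
    reference_ans = torch.randn(1, 9, 300)
    print(model(student_ans, reference_ans))   # e.g. tensor([0.43])
```

A similarity in (0, 1] maps naturally onto normalized grades; for the 0–5 scale of the Mohler dataset, multiplying the output by 5 would be one straightforward rescaling.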

Highlights

  • Pretrained language models such as BERT, GPT-2, and ELMo [1]–[3], built by processing large corpora with advanced deep learning methods, have attracted much attention from Natural Language Processing (NLP) researchers

  • The datasets on which the tests are carried out are tokenized, and the correct synset candidates of these tokens are determined through Word Sense Disambiguation (WSD); a sketch of this step follows this list

  • SemSpace [15] is a synset-based contextualized sense embedding approach that aims to find a weight for each relationship and a sense vector for each word sense defined in WordNet
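As a rough illustration of the tokenization and WSD step referenced in the list above, the sketch below uses NLTK's tokenizer and its simplified Lesk implementation to attach one WordNet synset candidate to each token. This is only an assumption-laden stand-in: the study may use a different WSD algorithm, and the mapping from the chosen synsets to SemSpace sense vectors is indicated only by a comment.

```python
from nltk import word_tokenize
from nltk.wsd import lesk
from nltk.corpus import wordnet as wn


def synsets_for_answer(answer: str):
    """Tokenize an answer and attach a candidate WordNet synset to each token.

    Returns a list of (token, synset-or-None) pairs; tokens with no WordNet
    entry (punctuation, function words, etc.) keep None.
    """
    tokens = word_tokenize(answer.lower())
    tagged = []
    for token in tokens:
        candidates = wn.synsets(token)
        if not candidates:
            tagged.append((token, None))
            continue
        # Simplified Lesk: pick the candidate whose gloss overlaps most
        # with the rest of the answer; fall back to the first synset.
        sense = lesk(tokens, token) or candidates[0]
        tagged.append((token, sense))
        # In the full pipeline, each chosen synset would then be mapped to
        # its SemSpace sense vector before being fed to the LSTM encoder.
    return tagged


print(synsets_for_answer("A stack is a last in first out data structure"))
```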


Introduction

Pretrained language models such as BERT, GPT-2, and ELMo [1]–[3], built by processing large corpora with advanced deep learning methods, have attracted much attention from Natural Language Processing (NLP) researchers. Thanks to these language models, it is possible to implement effective downstream NLP applications such as sentiment analysis, social chatbots, or smart virtual assistants that answer questions in a specific domain, known as automatic question answering systems [4]–[7].

