Abstract

University and board examinations are administered offline every year, and large numbers of students sit subjective (descriptive) papers. Manually evaluating so many answer scripts demands considerable time and effort, and the quality of evaluation can vary with the evaluator's disposition. Competitive and entrance tests, by contrast, consist largely of objective or multiple-choice questions and are graded by machine, which makes their evaluation straightforward; manual evaluation of subjective papers remains a difficult and taxing undertaking. A major obstacle to applying artificial intelligence (AI) to subjective answers is the limited interpretability and acceptance of its results. Numerous attempts have been made to evaluate student responses computationally, but most of this work relies on standard counts or exact terms, and carefully curated data sets remain scarce. To evaluate descriptive responses automatically, this paper proposes a novel approach that combines machine learning and natural language processing tools, including WordNet, Word2vec, word mover's distance (WMD), cosine similarity, Multinomial Naive Bayes (MNB), and term frequency-inverse document frequency (TF-IDF). Responses are assessed against solution statements and keywords, and a machine learning model is trained to predict the grades of responses. Overall, the results indicate that WMD outperforms cosine similarity, and that the machine learning model could also be used independently given appropriate training. Without the MNB model, experimentation yields an accuracy of 88%; using MNB reduces the error rate by a further 1.3%.

Keywords - Subjective answer evaluation, big data, machine
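The following is a minimal sketch, not the authors' code, of the two similarity measures the abstract compares: a student answer is scored against a model answer with TF-IDF plus cosine similarity and with Word2vec plus WMD. The example sentences, the pretrained embedding choice, and the use of scikit-learn/gensim are illustrative assumptions; the MNB grade classifier described in the abstract would be trained separately on labelled answers.

```python
# Sketch of similarity-based answer scoring (illustrative, not the paper's code).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import gensim.downloader as api

model_answer = "Photosynthesis converts light energy into chemical energy stored in glucose."
student_answer = "Plants use sunlight to make glucose, storing the light energy chemically."

# 1) TF-IDF + cosine similarity: weighted term overlap (higher = more similar).
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform([model_answer, student_answer])
cos_sim = cosine_similarity(tfidf[0], tfidf[1])[0, 0]

# 2) Word2vec + WMD: semantic transport distance between the two answers
#    (lower = more similar). The pretrained model below is an assumed choice;
#    gensim's wmdistance needs the POT/pyemd package installed.
word_vectors = api.load("word2vec-google-news-300")
wmd = word_vectors.wmdistance(model_answer.lower().split(),
                              student_answer.lower().split())

print(f"cosine similarity: {cos_sim:.3f}, WMD: {wmd:.3f}")
```

Either score can then be mapped to marks (e.g. by thresholding), or fed, together with keyword matches, to a classifier such as MNB trained on graded answers.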
