Abstract

This paper proposes an approach to evaluating students' descriptive answers, using a comparison-based approach in which the student's answer is compared with a standard answer. The standard answers contain domain-specific knowledge according to the category (how, why, what, etc.) of the questions asked in the examination. Several state-of-the-art studies claim that LSA correlates with a human assessor's way of evaluation. With this as background, we investigated the evaluation of students' descriptive answers using Latent Semantic Analysis (LSA). In the course of the research, it was found that standard LSA has limitations: LSA research usually involves heterogeneous text (text from various domains), which may include irrelevant terms that are highly susceptible to noisy, missing, and inconsistent data. We propose a new technique inspired by LSA, denoted "High Precision Latent Semantic Evaluation" (HPLSE), in which LSA has been modified to overcome some of these limitations; this has also increased precision. Using the proposed technique (HPLSE) on the same datasets, the average score difference and standard deviation between a human assessor and the computer assessor are reduced, and the Pearson correlation coefficient (r) increases considerably. The new technique is discussed and demonstrated on various problem classes.

Highlights

  • The current system of manual evaluation has limitations that make it important to automate the evaluation of descriptive answers

  • Available systems are useful for essay grading and short-answer grading, but descriptive answer evaluation is still an open research issue

  • LSA has been modified to overcome some of these limitations, and the proposed technique, denoted 'High Precision Latent Semantic Evaluation' (HPLSE), has been used to automate the descriptive answer evaluation process, with much better results

Summary

Introduction

The current system of manual evaluation has some limitations, due to which it becomes important to automate the evaluation of descriptive answers. LSA is a statistical natural language processing (NLP) method for inferring meaning from text; it was developed by researchers at Bellcore as an information retrieval technique in the late 1980s (Deerwester et al., 1990). LSA has been modified to overcome some of these limitations, and the proposed technique, denoted 'High Precision Latent Semantic Evaluation' (HPLSE), has been used to automate the descriptive answer evaluation process, with much better results. The standard/reference answer can be enriched iteratively using the (good) answers of the students, to increase its adequacy; this issue has already been raised in the introduction of this report, as it is an important aspect of automating the descriptive answer evaluation process. Online tools that support the management of online assessments, such as Moodle and Zoho, are based on string-matching techniques for short answers, but long-answer evaluation is still handled manually by most systems. Some approaches are based on keyword matching, sequence matching, quantitative analysis, fuzzy systems, or rule-based systems, which provide partial solutions for the online assessment of answer sheets, but general descriptive answer evaluation is still an open problem.
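To make the comparison-based idea concrete, the following is a minimal sketch of standard LSA scoring, not the authors' HPLSE: a toy term-document matrix is reduced with a truncated SVD, and each student answer is scored by its cosine similarity to the standard answer in the latent space. The vocabulary, the matrix, the choice of k, and the function name `lsa_similarity` are all illustrative assumptions.

```python
import numpy as np

def lsa_similarity(term_doc, k=2):
    """Score each document (column) against the standard answer, which is
    assumed to be the LAST column of the term-document count matrix.

    The matrix is projected into a k-dimensional latent space via a
    truncated SVD, and cosine similarity is computed there.
    """
    # Truncated SVD: keep only the k largest singular values/vectors,
    # which is the dimensionality-reduction step at the heart of LSA.
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    docs = (np.diag(s[:k]) @ Vt[:k, :]).T  # each row: a document in latent space

    ref = docs[-1]  # the standard/reference answer
    return [
        float(d @ ref / (np.linalg.norm(d) * np.linalg.norm(ref)))
        for d in docs[:-1]
    ]

# Toy vocabulary of 5 terms. Columns: student answer A, student answer B,
# and the standard answer. Answer A shares more terms with the standard.
X = np.array([
    [2, 0, 2],
    [1, 0, 1],
    [0, 3, 1],
    [0, 1, 0],
    [1, 0, 1],
], dtype=float)

scores = lsa_similarity(X, k=2)
```

In a real system the counts would typically be tf-idf weighted and k chosen empirically; HPLSE, as described in this paper, further restricts the text to domain-specific terms to reduce the noise that heterogeneous corpora introduce.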

Literature Review
Methodology
Comparing Results of LSA and HPLSE Technique
Conclusion and Future Scope
Funding Information
