Abstract

Automatic Essay Grading (AEG) is computer technology that evaluates and grades written prose. Short essay answers, written in a few short sentences, come in two types: the open-ended short answer and the close-ended short answer; the latter, in the computer science subject, is the domain of this research. Marking short essay answers automatically is one of the most complicated tasks because it relies heavily on semantic similarity, the degree to which two sentences are similar in meaning. Humans can easily judge whether two concepts are related to each other, but a problem arises when a student forgets the target wording and answers with synonyms, so that the answer differs from the model answer prepared by the instructor. Standard text similarity measures perform poorly on such tasks, and a short answer provides only limited content, since its length typically ranges from a single word to a dozen words. This research makes two proposals. The first is an Alternative Sentence Generator Method that generates alternative model answers by connecting the method to a synonym dictionary. The second combines three algorithms in the matching phase: Common Words (COW), Longest Common Subsequence (LCS) and Semantic Distance (SD); these algorithms have been used successfully in many Natural Language Processing systems and have yielded efficient results. The system was manually tested on 40 questions answered by three students and evaluated by a teacher in class. The proposed system yielded an 82% correlation with human grading, making it significantly better than other state-of-the-art systems.
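Two of the three matching algorithms named in the abstract can be sketched briefly. The following is a minimal illustration, not the paper's implementation: it assumes plain whitespace tokenization and lower-casing, the function names and example sentences are ours, and Semantic Distance is omitted because it depends on an external lexical resource.

```python
def common_words(model, student):
    """COW (sketch): fraction of distinct model-answer tokens
    that also appear in the student answer."""
    ta = set(model.lower().split())
    tb = set(student.lower().split())
    return len(ta & tb) / len(ta)

def lcs_length(model, student):
    """LCS (sketch): length of the longest common subsequence of
    tokens, computed with standard dynamic programming."""
    ta, tb = model.lower().split(), student.lower().split()
    dp = [[0] * (len(tb) + 1) for _ in range(len(ta) + 1)]
    for i, x in enumerate(ta, 1):
        for j, y in enumerate(tb, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

model = "a compiler translates source code into machine code"
student = "a compiler converts source code to machine code"
print(common_words(model, student))  # 5 of 7 distinct model tokens shared
print(lcs_length(model, student))    # common subsequence of 6 tokens
```

In a combined scheme like the one the paper describes, scores such as these would be normalized and merged into a single grade; the weighting used here is not specified in this excerpt.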

Highlights

  • AEG can serve as an educational tool-kit, since increased writing with feedback is known to increase the quality of student writing (Yannakoudakis et al., 2011)

  • The second part constitutes two comparisons: first, the per-assignment system results for all students are compared with the results of (Mohler and Mihalcea, 2009), which uses the same dataset, in order to find the Pearson correlation; second, the system is compared with ASAGS, which measures the correlation between the human grades and the system's grades

  • The third stage compares the method with another state-of-the-art method, Latent Semantic Analysis (LSA), applied over the same dataset, which scored a 0.6465 correlation with human grading (Mohler and Mihalcea, 2009)
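The evaluation in the highlights above rests on the Pearson correlation between human and system grades. A minimal self-contained sketch of that computation follows; the two grade lists are hypothetical values invented for illustration, not data from the paper.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length
    lists of grades (covariance over the product of std. deviations)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical grades for five answers: human marks vs. system marks.
human = [5.0, 4.0, 3.0, 4.5, 2.0]
system = [4.5, 4.0, 2.5, 5.0, 2.5]
print(round(pearson(human, system), 3))
```

A coefficient near 1.0 means the system ranks answers much as the human grader does; the paper's reported figure of 0.82 would be produced by the same kind of comparison over the 40-question dataset.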


Summary

Introduction

Ali Muftah Ben Omran and Mohd Juzaiddin Ab Aziz / Journal of Computer Science 9 (10): 1369-1382, 2013.

AEG can serve as an educational tool-kit, since increased writing with feedback is known to increase the quality of student writing (Yannakoudakis et al., 2011). Each of these essay types has common features to be graded. Turney and Pantel (2010) show that two words are similar to the degree that their contexts are similar; in effect, words that keep the same company are very similar or synonymous in meaning. From this previous work it follows that texts made up of similar words will tend to be similar in meaning. This research focuses on building an efficient automatic essay grading system for short answers in the English language, based on the proposed methods.
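The insight that synonymous words signal similar meaning is what motivates the paper's Alternative Sentence Generator: expanding the model answer with synonym substitutions before matching. The sketch below illustrates the idea with a tiny hand-rolled synonym table standing in for the paper's synonym dictionary; the function name and entries are ours, not from the paper.

```python
from itertools import product

# Toy synonym table standing in for the paper's synonym dictionary.
SYNONYMS = {
    "translates": ["converts", "transforms"],
    "machine": ["object"],
}

def alternative_answers(model_answer):
    """Generate every variant of the model answer obtained by keeping
    each word or swapping it for one of its listed synonyms."""
    options = [[w] + SYNONYMS.get(w, []) for w in model_answer.lower().split()]
    return [" ".join(choice) for choice in product(*options)]

variants = alternative_answers("a compiler translates source into machine code")
print(len(variants))  # 3 choices for "translates" x 2 for "machine" = 6 variants
```

Each generated variant can then be matched against the student answer with the COW/LCS/SD scores, and the best-scoring variant taken as the grade, so a student who wrote "converts" instead of "translates" is not penalized.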

