Abstract

Background
Writing composition is a significant component of measuring test-takers' ability in any language exam. However, scoring these writing compositions, or essays, is very challenging in terms of reliability and time. The demand for objective and quick scores has driven the development of computer systems that can automatically grade essay questions targeting specific prompts. Automated Essay Scoring (AES) systems overcome the challenges of scoring writing tasks by using Natural Language Processing (NLP) and machine learning techniques. The purpose of this paper is to review the literature on the AES systems used for grading essay questions.

Methodology
We reviewed the existing literature using Google Scholar, EBSCO and ERIC, searching for the terms "AES", "Automated Essay Scoring", "Automated Essay Grading", or "Automatic Essay" for essays written in the English language. Two categories were identified: handcrafted-features and automatic-featuring AES systems. Systems in the former category are closely bound to the quality of their designed features. Systems in the latter category, by contrast, automatically learn the features of an essay and the relations between an essay and its score, without any handcrafted features. We reviewed the systems of both categories in terms of each system's primary focus, the technique(s) it uses, its need for training data, its instructional application (feedback system), and the correlation between its e-scores and human scores. The paper includes three main sections. First, we present a structured literature review of the available handcrafted-features AES systems. Second, we present a structured literature review of the available automatic-featuring AES systems. Finally, we present a discussion and draw conclusions.

Results
AES models have been found to utilize a broad range of manually tuned shallow and deep linguistic features. AES systems have many strengths: they reduce labor-intensive marking activities, ensure a consistent application of scoring criteria, and ensure the objectivity of scoring. Although many techniques have been implemented to improve AES systems, three primary challenges remain: the systems lack the sense of the rater as a person, they can be deceived into giving an essay a lower or higher score than it deserves, and they have limited ability to assess the creativity of ideas and propositions or to evaluate their practicality. The techniques proposed so far address only the first two challenges.
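To make the two categories concrete, the following is a minimal, illustrative sketch of a handcrafted-features pipeline: shallow linguistic features are computed by hand-designed rules, a regression model maps them to scores, and agreement with human raters is checked via Pearson correlation. The features, toy data, and model choice are hypothetical examples, not taken from any specific system in the review.

```python
# Minimal sketch of a handcrafted-features AES pipeline (illustrative only):
# shallow features -> linear model -> correlation of e-scores with human scores.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge

def extract_features(essay: str) -> list:
    """Shallow proxies of the kind handcrafted-features systems rely on."""
    words = essay.split()
    sentences = [s for s in essay.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    n_words = max(len(words), 1)
    return [
        len(words),                                         # essay length
        len(set(w.lower() for w in words)) / n_words,       # type-token ratio
        sum(len(w) for w in words) / n_words,               # mean word length
        len(words) / max(len(sentences), 1),                # mean sentence length
    ]

# Hypothetical training data: essays paired with human scores.
essays = [
    "The quick brown fox jumps over the lazy dog. It was remarkably fast.",
    "Cats sleep. Cats eat. Cats sleep.",
    "A well developed argument requires evidence, structure, and clarity of thought.",
]
human_scores = np.array([3.0, 1.0, 5.0])

X = np.array([extract_features(e) for e in essays])
model = Ridge(alpha=1.0).fit(X, human_scores)
e_scores = model.predict(X)

# AES systems are typically validated by the agreement between
# machine-assigned e-scores and human scores.
r, _ = pearsonr(e_scores, human_scores)
print(f"Pearson correlation with human scores: {r:.2f}")
```

The quality of such a system is bounded by the quality of the designed features, which is exactly the dependency the review notes for this category.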
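By contrast, an automatic-featuring system learns its own essay representation directly from the text. The PyTorch sketch below shows a common pattern in this category (an embedding layer, a recurrent layer, mean-over-time pooling, and a sigmoid output trained against normalized human scores); it is a generic illustration under those assumptions, not the architecture of any particular reviewed system.

```python
# Minimal sketch of an automatic-featuring AES model (illustrative only):
# the network learns its own essay representation; no handcrafted features.
import torch
import torch.nn as nn

class NeuralEssayScorer(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 50, hidden_dim: int = 64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.scorer = nn.Linear(hidden_dim, 1)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        embedded = self.embedding(token_ids)       # (batch, seq_len, embed_dim)
        outputs, _ = self.lstm(embedded)           # (batch, seq_len, hidden_dim)
        essay_repr = outputs.mean(dim=1)           # mean-over-time pooling
        # Sigmoid keeps predictions in [0, 1]; train against
        # min-max-normalized human scores, rescale at prediction time.
        return torch.sigmoid(self.scorer(essay_repr)).squeeze(-1)

# Toy usage: token ids would come from a real tokenizer and vocabulary.
model = NeuralEssayScorer(vocab_size=1000)
batch = torch.randint(1, 1000, (2, 30))            # two essays, 30 tokens each
normalized_scores = model(batch)
print(normalized_scores.shape)                     # torch.Size([2])
```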

Highlights

  • Test items are usually classified into two types: objective or selective-response (SR), and subjective or constructed-response (CR)

  • The purpose of this paper is to review the literature on Automated Essay Scoring (AES) systems that score the extended-response items in language writing exams

  • Computer technologies, especially Natural Language Processing (NLP) and Artificial Intelligence (AI), have been able to assess the quality of writing using AES technology

Introduction

Test items (questions) are usually classified into two types: objective or selective-response (SR), and subjective or constructed-response (CR). Although many techniques have been implemented to improve AES systems, three primary challenges remain: the systems lack the sense of the rater as a person, they can be deceived into giving an essay a lower or higher score than it deserves, and they cannot assess the creativity of ideas and propositions or evaluate their practicality.
