Abstract

Advances in technology have enabled writing evaluation and feedback to be supported by Natural Language Processing (NLP) and Latent Semantic Analysis (LSA). The integration of NLP and LSA into writing instruction has contributed to the development of Automatic Writing Evaluation (AWE), a system that compares a text to a large database of writing of the same genre and supports an evaluation process that is independent of the human rater. The potential reach of T.E.R.A. in evaluation is one of the bases of this research. This study compares the evaluation process of human raters to that of T.E.R.A. in order to determine (a) how the assessment criteria are configured in each evaluation process, and (b) which elements are rated higher, the same, or lower. The results of this study are intended to serve as a point of departure for establishing an AWE system suited to the context of English as a Foreign Language taught as a compulsory course at Bina Nusantara University.
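For readers unfamiliar with the mechanism, the following is a minimal sketch (not the authors' or T.E.R.A.'s implementation) of how an LSA-based comparison can work: a student text is projected into a reduced semantic space built from a genre-matched reference corpus, and its similarity to that corpus serves as one evaluation signal. The corpus, essay text, and number of dimensions below are illustrative assumptions.

```python
# Illustrative LSA-based text comparison (hypothetical data, not T.E.R.A. itself).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

reference_corpus = [  # hypothetical genre-matched reference texts
    "Argumentative essays state a clear thesis and support it with evidence.",
    "Each body paragraph develops one idea with examples and explanation.",
    "A conclusion restates the thesis and summarizes the main arguments.",
]
student_essay = "The essay argues its thesis with examples in every paragraph."

# Build a term-document matrix and reduce it with SVD (the core of LSA).
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(reference_corpus + [student_essay])
lsa = TruncatedSVD(n_components=2, random_state=0)  # few dimensions for a tiny corpus
X_lsa = lsa.fit_transform(X)

# Compare the essay to each reference text in the latent semantic space.
scores = cosine_similarity(X_lsa[-1:], X_lsa[:-1])[0]
print({f"reference_{i}": round(float(s), 3) for i, s in enumerate(scores)})
```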
