Abstract

Assessing students' writing ability has long been an important component of research on automatic essay evaluation. Existing methods usually focus on producing numerical scores, and little research has addressed providing feedback on writing quality using text generation techniques. We believe that generated feedback on essay quality is more valuable to students as a reference. In this study, we address the problem of essay feedback generation by proposing an encoder-decoder neural network model called GEEF (Generate Essay Feedback), under the assumption that feedback is written based on the source essay text and the grading of important writing skills. In addition to the text of the input essay, our model also takes in additional features, including fluency, coherence, richness, and literary talent. We construct an essay feedback corpus along with human-annotated resources to facilitate the study. Experimental results demonstrate that the proposed approach achieves promising performance compared with baseline methods on both automatic and human evaluation metrics. The generated feedback sentences on different aspects of essay writing can help students understand their strengths and weaknesses in writing skills. A further potential application is assisting human raters, who often find it difficult to recall many commonly used feedback patterns, in composing more detailed comments on the basis of the generated feedback.
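To make the described setup concrete, the sketch below shows one plausible way an encoder-decoder could condition feedback generation on the essay text plus scalar rubric features (fluency, coherence, richness, literary talent). It is an illustrative assumption, not the authors' GEEF implementation; the class name, layer sizes, and the `bridge` projection are hypothetical choices.

```python
# Minimal sketch (assumed, not the paper's actual GEEF code) of feedback
# generation conditioned on the essay text and four rubric scores.
import torch
import torch.nn as nn

class FeedbackGenerator(nn.Module):
    def __init__(self, vocab_size, emb_dim=256, hid_dim=512, n_features=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # Project [essay encoding ; rubric scores] into the decoder's initial state.
        self.bridge = nn.Linear(hid_dim + n_features, hid_dim)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, essay_ids, rubric_scores, feedback_ids):
        # essay_ids: (B, T_src), rubric_scores: (B, 4), feedback_ids: (B, T_tgt)
        _, h = self.encoder(self.embed(essay_ids))            # h: (1, B, hid_dim)
        cond = torch.cat([h.squeeze(0), rubric_scores], dim=-1)
        h0 = torch.tanh(self.bridge(cond)).unsqueeze(0)       # decoder init state
        dec_out, _ = self.decoder(self.embed(feedback_ids), h0)
        return self.out(dec_out)                              # (B, T_tgt, vocab)
```

Such a model would typically be trained with token-level cross-entropy against reference feedback sentences, consistent with the paper's framing of feedback as conditioned on both the source essay and skill-level grades.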
