Abstract

Evaluation plays a significant role in college English writing courses, as it is an effective way to assess and motivate learners. Traditional instructor scoring and feedback are considered helpful, yet labor-intensive and not efficient enough. Automated writing evaluation (AWE) is the use of specialized computer programs to grade and evaluate writing in educational settings. Integrating machine feedback with human evaluation is expected to be both comprehensive and efficient. This paper investigates the reliability and feasibility of integrating AWE with human evaluation in a college English writing course. An empirical study was conducted at a foreign languages university in Beijing, where a blended evaluation mode was applied; feedback from the students verified that, with proper design, the integration of AWE and human scoring was feasible and efficient. The findings may provide reference and insight for the design of evaluation modes in college English writing courses.
