Abstract

Automated writing evaluation (AWE) plays an important role in writing pedagogy and has received considerable research attention recently; however, few reviews have systematically analyzed the recent publications in this area. The present review provides a comprehensive analysis of the literature on AWE feedback for writing in terms of methodology, types of learners, types of feedback and its applications, learning outcomes, and implications. A total of 48 articles from Social Science Citation Index journals and four other important journals in the field of language education were collected and analyzed. The findings revealed that most previous studies on AWE applied quantitative rather than purely qualitative research methods. In approximately 33% of the studies, the experiments lasted less than ten weeks, and 10% of the studies consisted of a single session. Over half of the studies had fewer than 30 participants, while 21% had medium to large group sizes (from 51 to 100). Most of the articles focused on L2 writers, with little attention paid to L1 writers and K-12 students. AWE feedback can, to some extent, improve students’ writing from a product-oriented perspective but is not as effective as human feedback (e.g., teacher or peer feedback). Students generally considered AWE feedback useful and were motivated when using it, although they noted a lack of accuracy and explicitness, as the feedback tended to be generic and formulaic. The results of the review have several implications for researchers, teachers, and developers of AWE systems.
