Abstract

Automated writing evaluation (AWE) is frequently used to provide feedback on student writing. Many empirical studies have examined the effectiveness of AWE on writing quality, but their results have been inconclusive, leaving the magnitude of AWE's overall effect and the factors that influence its effectiveness across studies unclear. This study re-examined the issue by meta-analyzing 26 primary studies published from 2010 to 2022, with a total of 2,468 participants. The results revealed that AWE had a large positive overall effect on writing quality (g = 0.861, p < 0.001). Moderator analyses further indicated that AWE was more effective for post-secondary students than for secondary students, and that it benefited English as a Foreign Language (EFL) and English as a Second Language (ESL) learners more than Native English Speaker (NES) learners. With respect to genre, AWE had a larger effect on argumentative writing than on academic and mixed writing genres. However, intervention duration, feedback combination, and AWE platform did not moderate the effect of AWE on writing quality. Implications and recommendations for both research and practice are discussed in depth.
