Abstract

Automated Writing Evaluation (AWE) is a machine-based technique for assessing learners' writing. Recently, it has been widely implemented to improve learners' editing strategies. Several studies have compared self-editing with peer editing, but few have compared automated peer editing with automated self-editing. To fill this research gap, the present study implements the AWE software WRITER for peer and self-editing. A pre-post quasi-experimental design with convenience sampling was adopted to compare automated and non-automated editing of cause-effect essay writing. Forty-four Arab EFL learners were assigned to four groups: two control groups (peer editing and self-editing) and two experimental groups (automated peer editing and automated self-editing). The quasi-experimental design was triangulated with qualitative data drawn from participants' retrospective notes and questionnaire responses collected during and after the automated editing. The quantitative data were analyzed using non-parametric tests, and the qualitative data underwent thematic and content analysis. The results reveal that the AWE software positively affected both experimental groups; however, no significant difference was detected between them. The qualitative data reflect participants' positive evaluation of both the software and the automated peer and self-editing experience.

