Abstract

Background
Assignments that involve writing based on several texts are challenging for many learners. Formative feedback supporting learners in these tasks should be informed both by the characteristics of the evolving written product and by the learning processes learners enact while developing that product. However, formative feedback in writing tasks based on multiple texts has focused almost exclusively on the essay product and has rarely included self-regulated learning (SRL) processes.

Objectives
We explored the viability of using product and process features to develop machine learning classifiers that identify low- and high-performing essays in a multi-text writing task.

Methods
We examined the learning processes and essay submissions of 163 graduate students working on an authentic multi-text writing assignment. We utilised learners' trace data to obtain process features, and state-of-the-art natural language processing methods to obtain product features, for our classifiers.

Results and Conclusions
Of the four popular classifiers examined in this study, Random Forest achieved the best performance (accuracy = 0.80, recall = 0.77). The analysis of important features in the Random Forest classification model revealed one product feature (coverage of reading topics) and three process features (elaboration/organisation, re-reading and planning) as important predictors of writing quality.

Major Takeaways
The classifier can be used as part of a future automated writing evaluation system that supports formative assessment at scale in writing tasks based on multiple texts across different courses. Based on the important predictors of essay performance, guidance can be tailored to learners at the outset of a multi-text writing task to help them do well in the task.
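To make the modelling approach concrete, the sketch below shows one way to train a Random Forest on combined product and process features and inspect feature importances, as described in the abstract. This is not the authors' code: it assumes scikit-learn, a hypothetical features.csv table, and illustrative column names (topic_coverage, elaboration_organisation, re_reading, planning) that merely echo the features named in the abstract.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, recall_score

# Hypothetical combined feature table: one row per learner/essay.
# "topic_coverage" stands in for the product feature (coverage of reading
# topics); the remaining columns stand in for trace-derived process features.
df = pd.read_csv("features.csv")  # hypothetical file, not from the paper
feature_cols = ["topic_coverage", "elaboration_organisation",
                "re_reading", "planning"]
X = df[feature_cols]
y = df["high_performing"]  # binary label: low- vs. high-performing essay

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

clf = RandomForestClassifier(n_estimators=500, random_state=42)
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print("recall:", recall_score(y_test, pred))

# Feature importances indicate which product/process features drive the
# classification, analogous to the importance analysis reported above.
for name, imp in sorted(zip(feature_cols, clf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")

In practice the study's feature set would be larger (multiple trace-derived process measures and NLP-derived product measures), but the workflow of fitting the classifier, evaluating accuracy and recall, and ranking feature importances follows this shape.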
