Abstract

The inclusion of peer feedback activities in the academic writing process has become common practice in higher education. However, while research has shown that students perceive many features of peer feedback to be useful, the actual effectiveness of these features in terms of measurable learning outcomes remains unclear. The aim of this study was to investigate the linguistic and review features of peer feedback and how these might influence peers to accept or reject revision advice offered in the context of academic writing among L2 learners. A corpus-based machine learning approach was employed to test three algorithms (logistic regression, decision tree, and random forest) on three feature models (linguistic, review, and all features) in order to determine which algorithm offered the best predictive results and which feature model most accurately predicted implementation. The results indicated that random forest was the most effective algorithm for modeling the different features. In addition, the feature model containing all features most accurately predicted implementation. The findings further suggest that directive comments, and multiple peer comments on the same topic, influence implementation.
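The comparison described above — evaluating three classifier families on feature sets to predict whether revision advice is implemented — can be sketched as follows. This is an illustrative sketch only, not the authors' code: the synthetic data, feature counts, and evaluation settings are all hypothetical stand-ins using scikit-learn.

```python
# Hypothetical sketch of the study's model comparison (not the original code).
# A binary target stands in for "peer's revision advice implemented (1) or not (0)";
# synthetic features stand in for the linguistic and review features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic corpus: 500 feedback comments, 12 assumed features.
X, y = make_classification(n_samples=500, n_features=12, n_informative=6,
                           random_state=0)

# The three algorithms compared in the study.
models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

# Mean 5-fold cross-validated accuracy per algorithm.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in models.items()}

for name, acc in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {acc:.3f}")
```

In practice the same loop would be repeated over each feature subset (linguistic only, review only, all features) to identify which combination of algorithm and feature model best predicts implementation.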
