Abstract

Although writing rubrics can provide valuable feedback, the criteria they use are often subjective, leaving raters to apply their own tacit biases. The purpose of this study is to determine whether discrete, empirically measurable characteristics of texts can be used in lieu of the rubric to objectively assess the writing quality of EFL learners. Academic paragraphs written by 38 participants were evaluated according to several empirically calculable criteria related to cohesion, content, and grammar. These values were then compared, using multiple regression, to scores obtained from holistic scoring by multiple raters. The resulting multiple correlation (R = .873) was highly significant, suggesting that more empirical, impartial means of writing evaluation can now be used in conjunction with technology to provide student feedback and teacher training.
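To make the procedure concrete, here is a minimal sketch of the study's general approach, not its actual instruments: simplified stand-in measures for cohesion, content, and grammar are computed for each paragraph and regressed against averaged holistic rater scores, from which the multiple correlation R is recovered. All feature definitions and function names below are assumptions for illustration only.

```python
# A minimal sketch, not the authors' code: regress empirically measurable
# text features against holistic rater scores, as the study describes.
# All feature definitions here are simplified stand-ins (assumptions).
import numpy as np
from sklearn.linear_model import LinearRegression

def extract_features(text: str) -> list[float]:
    """Toy proxies for cohesion, content, and grammar."""
    sentences = [s.split() for s in text.split(".") if s.strip()]
    words = [w.lower() for s in sentences for w in s]
    # Cohesion proxy: mean word overlap between adjacent sentences.
    overlaps = [len(set(a) & set(b)) / max(len(set(a) | set(b)), 1)
                for a, b in zip(sentences, sentences[1:])]
    cohesion = sum(overlaps) / max(len(overlaps), 1)
    # Content proxy: lexical diversity (type-token ratio).
    diversity = len(set(words)) / max(len(words), 1)
    # Grammar proxy: mean sentence length (a crude stand-in for
    # parse-based error counts).
    mean_len = len(words) / max(len(sentences), 1)
    return [cohesion, diversity, mean_len]

def fit_writing_model(paragraphs: list[str], holistic_scores: list[float]):
    """Fit a multiple regression of text features on averaged rater scores."""
    X = np.array([extract_features(p) for p in paragraphs])
    y = np.array(holistic_scores)
    model = LinearRegression().fit(X, y)
    R = np.sqrt(model.score(X, y))  # multiple correlation R from R^2
    return model, R
```

In practice, the study's criteria would be computed with more robust linguistic tooling, and predictive power would be checked against texts the model has not seen.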

Highlights

  • Several studies recognize the efficacy of the rubric as a means to score writing and provide feedback (Cope, Kalantzis, McCarthey, Vojak, & Kline, 2011; Mansilla, Duraisingh, Wolfe, & Haynes, 2009; Peden & Carroll, 2008)

  • Results of this study reveal that several empirically measurable criteria related to cohesion, content, and grammar can be used to predict the overall writing quality of EFL learners

  • Empirical evaluation of writing has several advantages over traditional methods of evaluation. It allows writing assessment to be automated, which opens the door to using technology to provide both summative and formative writing feedback for students or teachers (see the toy sketch after this list)
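To illustrate the automation point above, the following toy sketch, which is not from the study, shows how feature values like those described could be mapped to formative comments alongside a summative score. The thresholds, messages, and example values are all invented for demonstration.

```python
# Toy illustration of automated formative feedback; the thresholds,
# messages, and feature values below are invented for demonstration.
def formative_feedback(cohesion: float, diversity: float) -> list[str]:
    tips = []
    if cohesion < 0.2:   # hypothetical cut-off for weak cohesion
        tips.append("Link adjacent sentences by repeating key terms.")
    if diversity < 0.4:  # hypothetical cut-off for narrow vocabulary
        tips.append("Vary word choice to broaden lexical range.")
    return tips or ["No cohesion or vocabulary issues flagged."]

# Example: a paragraph scored low on cohesion but fine on diversity.
print(formative_feedback(cohesion=0.15, diversity=0.55))
```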

Summary

Introduction

Several studies recognize the efficacy of the rubric as a means to score writing and provide feedback (Cope, Kalantzis, McCarthey, Vojak, & Kline, 2011; Mansilla, Duraisingh, Wolfe, & Haynes, 2009; Peden & Carroll, 2008). Recent adaptations of the rubric have even shown the potential to increase formative feedback through the use of both technology and self-assessment strategies (Cope et al., 2011; Peden & Carroll, 2008). While rubrics can provide a systematic means to evaluate student writing, their reliability and validity can be questionable. This is exemplified by recent studies revealing that rater bias and invalid writing assessments negatively impact summative student evaluation. In response, educators have advocated the use of more authentic assessment methods such as self-assessment checklists, writing conferences, and writing portfolios (Schulz, 2009).

