Abstract

Using generalizability theory, this study examined the rating variability and reliability of English as a second language (ESL) students' writing in two provincial examinations in Canada. This article discusses expected and unexpected similarities and differences in rating variability and reliability between the two testing programs. As expected, there was more desired and less unwanted variation in ESL and native-English-speaking (NES) students' writing scores in Province B than in Province A. Unexpectedly, however, the results revealed systematic differences between ESL and NES students in rating variability. Further, reliability was lower for ESL students' scores than for NES students' scores. These findings raise potential concerns about the fairness of large-scale ESL writing assessments.
