Abstract

Writing is generally assessed internationally using rubric-based approaches, but a growing body of evidence suggests that the reliability of such approaches is poor. In contrast, comparative judgement studies suggest that open-ended tasks such as writing can be assessed with greater reliability. Many previous studies, however, have failed to provide direct comparisons between these approaches because the reliability measures reported for rubric- and marking-based studies are not comparable with the internal reliability measures cited in comparative judgement studies. We compared the classification accuracy and consistency of a rubric-based approach to grading writing with those of a comparative judgement approach. The writing was gathered from 11-year-olds in low-stakes settings in England. We present evidence that the comparative judgement approach has twice the classification accuracy of the rubric-based approach and is perfectly viable in terms of its efficiency. We discuss the limitations of the comparisons and consider what a national system for assessing writing based on a comparative judgement approach could look like.
