Abstract

Rater reliability, the consistency of marking across different raters and across time, is an important component of the reliability of test scores. It is especially relevant to performance assessments such as writing, where the subjectivity of scoring can call the fairness of results into question. The present study, part of a larger funded research project, examined this overlooked area in the Omani context, namely the reliability of scoring the writing section of final exams at the University of Technology and Applied Sciences (UTAS). More specifically, the study estimated inter-rater and intra-rater reliability among 10 writing markers assessing 286 and 156 students' writing scripts at four levels of proficiency, analysed at three levels: the whole writing test, task 1 and task 2, and the constituent criteria of both tasks. The results indicated generally high inter-rater reliability and moderate intra-rater reliability. However, when interpreted in light of the raters' personal and background information, some low estimates highlight factors that influence scoring consistency across raters and over time.
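
The abstract does not specify which statistic was used to estimate reliability. As a minimal illustrative sketch only, one common inter-rater index is the correlation between two raters' scores on the same scripts; the scores below are invented placeholders, not study data.

```python
# Hedged sketch: one common way to estimate inter-rater reliability is the
# Pearson correlation between two raters' scores on the same set of scripts.
# The scores here are hypothetical and do not come from the UTAS study.
from scipy.stats import pearsonr

rater_a = [14, 11, 16, 9, 13]  # hypothetical scores from rater A
rater_b = [13, 12, 15, 10, 14]  # hypothetical scores from rater B on the same scripts

r, p = pearsonr(rater_a, rater_b)
print(f"Inter-rater correlation: r = {r:.2f} (p = {p:.3f})")
```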
