Abstract
We report on a standard-setting project in which the Item-Descriptor-Matching Method (IDM) and a complementary benchmarking approach were employed to align a suite of English language proficiency exams to the Common European Framework of Reference (CEFR), with a particular focus on the integrated and independent writing exams. Judges' ratings on eight writing tasks and 48 test-taker scripts were collected online via SmartSurvey. The judges gave CEFR-level judgements for tasks and scripts, stated which CEFR descriptors they had matched against the task demands and scripts, and evaluated the combined approach and its outcomes. It was therefore possible to monitor how judges applied and interpreted the CEFR descriptors, a prerequisite for establishing alignment validity. Analyses of judgement consistency revealed a high level of agreement on the task judgements and overall performance ratings, yet also some variation in the CEFR descriptors selected to underpin those judgements. Making these variations transparent facilitated a targeted discussion with an explicit focus on the CEFR, i.e. the framework to which the tests were to be aligned. Overall, the judges reported confidence in using the combined approaches, in their judgements, and in the recommended CEFR cut-scores, thus corroborating procedural validity.