Abstract

We report on a standard-setting project in which the Item-Descriptor-Matching Method (IDM) and a complementary benchmarking approach were employed to align a suite of English language proficiency exams to the Common European Framework of Reference (CEFR), with a particular focus on the integrated and independent writing exams. Judges’ ratings on eight writing tasks and 48 test taker scripts were collected online via SmartSurvey. The judges gave CEFR-level judgements for the tasks and scripts, stated which CEFR descriptors they had matched against the task demands and the scripts, and evaluated the combined approach and its outcomes. This made it possible to monitor how the judges applied and interpreted the CEFR descriptors, a prerequisite for establishing alignment validity. Analyses of judgement consistency revealed a high level of consistency in the task judgements and overall performance ratings, yet they also revealed some variation in the CEFR descriptors selected to underpin the judgements. Making these variations transparent facilitated a targeted discussion with an explicit focus on the CEFR, i.e. the framework to which the tests were to be aligned. Overall, the judges reported confidence in using the combined approaches, in their judgements and in the recommended CEFR cut-scores, thus corroborating procedural validity.
