Abstract

The Common European Framework of Reference (CEFR) is intended as a reference document for language education, including assessment. This article describes a project that investigated whether the CEFR can help test developers construct reading and listening tests based on CEFR levels. If the CEFR scales, together with the detailed description of language use contained in the CEFR, are not sufficient to guide test development at these levels, then what is needed to develop such an instrument? The project methodology involved gathering expert judgments on the usability of the CEFR for test construction, identifying what might be missing from the CEFR, developing a frame for the analysis of tests and specifications, and examining a range of existing test specifications, guidelines for item writers, and sample test tasks for different languages at the six levels of the CEFR. Outcomes included a critical review of the CEFR, a set of compilations of CEFR scales and of test specifications at the different CEFR levels, and a series of frameworks or classification systems, which led to a Web-mounted instrument known as the Dutch CEFR Grid. Interanalyst agreement in using the Grid for analyzing test tasks was quite promising, but use of the Grids needs to be supported by training and discussion before decisions on test task levels are made. The article concludes, however, that identifying separate CEFR levels is at least as much an empirical matter as it is a question of test content, whether determined by test specifications or identified by any content classification system or grid.
