Abstract

This article evaluates a sustained monologue speaking production test to validate its link to the CEFR model. The monologue test is a low-stakes production test that engages the test taker in sustained monologue tasks targeted at levels B2-C1 of the CEFR. The evaluation of the test included determining the extent to which the monologue speaking tasks and the single, criterion-related rating scale developed for the test are validly and reliably aligned with CEFR-benchmarked descriptors. The socio-cognitive framework for test evaluation was adopted, and an explanatory sequential mixed-methods research design was implemented. The evaluation revealed some contentious points of mismatch between the test items and the language demands that each item prompted in production. Consequently, selected items were improved or deleted to ensure that the appropriate competencies at B2-C1 are correctly prompted. Additionally, the findings underlined the need for test developers to adhere to five inter-related sets of procedures in justifying the claim that the monologue speaking test is aligned with the CEFR: familiarisation, specification, standardisation and benchmarking, standard-setting, and validation. It emerged that thorough familiarity with the CEFR on the part of test item writers and examiners is a fundamental requirement for a test that is closely aligned with the CEFR construct and levels. Thus, familiarisation training on the CEFR and its illustrative descriptors is a mandatory prerequisite for ensuring that test items and the assessment of the elicited production correspond to the levels and ratings described in the CEFR model.
