Abstract

Most language proficiency exams in Europe are presently developed so that reported scores can be related to the Common European Framework of Reference for Languages (CEFR; Council of Europe. 2001. Common European framework of reference for languages: learning, teaching, assessment. Cambridge: Cambridge University Press). Before any CEFR linking process can take place, such tests should be shown to be both valid and reliable, as “if an exam is not valid or reliable, it is meaningless to link it to the CEFR [and] a test that is not reliable cannot, by definition, be valid” (Alderson, Charles J. 2012. Principles and practice in language testing: compliance or conflict? Presentation at TEA SIG Conference, Innsbruck. http://tea.iatefl.org/inns.html (accessed May 2017)). In the test development process, tasks developed from test specifications must therefore be piloted to check that test items perform as predicted. The present article focuses on the statistical analysis of test trial data from the piloting of three B1 listening tasks carried out at the University of Granada’s Modern Language Center (CLM). Results from a detailed Rasch analysis of the data showed the test to be consistently measuring a unidimensional construct of listening ability. To confirm that the test contains items at the correct difficulty level, teacher judgements of candidates’ listening proficiency were also collected. The test was found to separate A2 and B1 candidates well; used in conjunction with the establishment of appropriate cut scores, the reported score can be considered an accurate representation of CEFR B1 listening proficiency. The study demonstrates how Rasch measurement can be used as part of the test development process to improve test tasks and hence create more reliable tests.
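For readers unfamiliar with Rasch measurement, the standard dichotomous Rasch model expresses the probability that a candidate answers an item correctly as a function of the difference between person ability and item difficulty; the abstract does not give the study's exact specification, so the sketch below is only the conventional form of the model, with the symbols $\theta_n$ (ability of person $n$) and $\beta_i$ (difficulty of item $i$) introduced here for illustration.

$$
P(X_{ni} = 1 \mid \theta_n, \beta_i) \;=\; \frac{\exp(\theta_n - \beta_i)}{1 + \exp(\theta_n - \beta_i)}
$$

Under this model, a fit between observed responses and model expectations across all items is what supports the claim that a single underlying dimension (here, listening ability) is being measured.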
