Abstract

The present study explores the validity of a high-stakes university entrance exam and examines the role of gender as a source of bias across the subtests of this language proficiency test. To this end, the Rasch model was used to detect biased items and to examine construct-irrelevant factors. A DIF analysis based on the Rasch model was conducted on 5,000 participants selected randomly from the pool of examinees who took the National University Entrance Exam in Iran for Foreign Languages (NUEEFL), a university entrance requirement for English language studies, in 2015. The findings reveal that the test scores are not free from construct-irrelevant variance, and several misfitting items were modified based on the fit statistics. In sum, the fairness of the NUEEFL was not confirmed. The results of such psychometric assessment could benefit test designers, stakeholders, administrators, and teachers.
