Abstract

With multiple options to choose from, there is always a chance that examinees guess correctly on multiple-choice (MC) items, potentially biasing item difficulty estimates. Correct responses obtained by random guessing thus threaten the validity of claims made from performance on an MC test. Under the Rasch framework, the current study investigates the effects of removing responses with likely guessing on item difficulty estimates, person ability measures, and the test information function (i.e., a function of measurement precision for person ability) on an MC language proficiency test. Results show that removing responses with likely guessing makes difficult items more difficult and gives high-performing examinees higher ability measures: lucky guesses appear more often in responses to difficult items than to easy items, so removing them lowers the proportion correct (i.e., raises the estimated difficulty) of difficult items. More importantly, the current study shows that the measurement precision for high-performing examinees increases after accounting for likely random guessing, while the precision for low- and medium-performing examinees remains similar with and without likely guessing. Implications for operational scoring of examinees are discussed.
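The mechanism the abstract describes can be illustrated with a small simulation. The sketch below is not the study's actual analysis; it is a hypothetical setup (arbitrary sample size, ability distribution, item difficulties, and an idealized 4-option guessing rule) that shows why removing lucky guesses lowers the proportion correct more for difficult items than for easy ones.

```python
import numpy as np

rng = np.random.default_rng(0)

n_persons, n_items = 2000, 20
theta = rng.normal(0.0, 1.0, n_persons)       # person abilities (hypothetical)
b = np.linspace(-2.0, 2.0, n_items)           # item difficulties, easy -> hard

# Rasch model: P(correct | knows) = logistic(theta - b)
p_rasch = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))

# Idealized response process: an examinee either "knows" the answer
# (with Rasch probability) or guesses among 4 options (1/4 success).
knows = rng.random((n_persons, n_items)) < p_rasch
guess_hit = rng.random((n_persons, n_items)) < 0.25
lucky = ~knows & guess_hit                    # correct purely by luck
correct = knows | lucky

# Proportion correct per item with lucky guesses included
p_with = correct.mean(axis=0)

# Proportion correct after removing the flagged lucky-guess responses
kept = ~lucky
p_without = (correct & kept).sum(axis=0) / kept.sum(axis=0)

# Hard items lose more: a larger share of their correct responses are guesses,
# so removing guesses drops their proportion correct (raises their difficulty).
drop = p_with - p_without
print(f"drop on easiest item: {drop[0]:.3f}, on hardest item: {drop[-1]:.3f}")
```

Under these assumptions, the drop in proportion correct is concentrated on the hard end of the difficulty scale, which is the pattern the study reports: difficult items become more difficult once responses with likely guessing are removed.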

