Abstract

Screening items for parameter drift helps protect against serious validity threats and helps ensure score comparability when equating forms. Although many high-stakes credentialing examinations operate with small sample sizes, few studies have investigated methods to detect drift in small-sample equating. This study demonstrates that several newly researched drift detection strategies can improve equating accuracy under certain small-sample conditions in which some anchor items display item parameter drift. Results showed that the recently proposed mINFIT and mOUTFIT methods, as well as the more conventional Robust-z, helped mitigate the adverse effects of drifting anchor items in conditions with higher drift levels or with more than 75 examinees. In contrast, the Logit Difference approach excessively removed invariant anchor items. The discussion provides recommendations on how practitioners working with small samples can use these results to make more informed decisions regarding item parameter drift.
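As a rough illustration (not the paper's code), the Robust-z screening named above can be sketched as follows. It assumes the commonly used definition, z_i = (d_i − median(d)) / (0.74 × IQR(d)), where d_i is the shift in an anchor item's difficulty estimate between forms; the 1.645 flagging threshold is an assumed, typical choice.

```python
# Hedged sketch of Robust-z anchor-item screening; the formula and the
# 1.645 cutoff are assumptions based on common practice, not taken from
# this study.
import statistics

def robust_z(diffs, threshold=1.645):
    """Return indices of anchor items flagged as drifting."""
    med = statistics.median(diffs)
    q = statistics.quantiles(diffs, n=4)   # quartiles
    iqr = q[2] - q[0]                      # interquartile range
    if iqr == 0:
        return []
    return [i for i, d in enumerate(diffs)
            if abs((d - med) / (0.74 * iqr)) > threshold]

# Example: difficulty shifts (logits) for ten anchor items; the last
# item shows a large shift and is flagged, the others are retained.
shifts = [0.02, -0.05, 0.04, 0.00, -0.03, 0.05, -0.02, 0.01, 0.03, 0.90]
print(robust_z(shifts))  # → [9]
```

Flagged items would then be dropped from the anchor set before equating, which is the mitigation the abstract describes.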
