Abstract

Screening items for parameter drift helps protect against serious validity threats and ensure score comparability when equating forms. Although many high-stakes credentialing examinations operate with small sample sizes, few studies have investigated methods for detecting drift in small-sample equating. This study demonstrates that several newly researched drift-detection strategies can improve equating accuracy with small samples under certain conditions in which some anchor items display item parameter drift. Results showed that the recently proposed mINFIT and mOUTFIT methods, as well as the more conventional Robust-z, helped mitigate the adverse effects of drifting anchor items in conditions with higher drift levels or with more than 75 examinees. In contrast, the Logit Difference approach excessively removed invariant anchor items. The discussion offers recommendations to help practitioners working with small samples make more informed decisions about item parameter drift.
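The abstract does not define the screening statistics it compares, but the Robust-z method it mentions is commonly computed as the difference in an anchor item's difficulty estimates between forms, centered at the median and scaled by a robust spread estimate (0.74 × IQR), with items flagged when the statistic exceeds a critical value. Below is a minimal, hedged sketch of that standard formulation; the function name, the example drift values, and the 2.7 critical value are illustrative assumptions, not details from this study.

```python
import statistics

def robust_z(drift, critical=2.7):
    """Flag anchor items whose difficulty shift looks like an outlier.

    drift: list of item-difficulty differences (new form minus base form)
    Returns a list of (z, flagged) pairs, one per anchor item.
    """
    med = statistics.median(drift)
    # Quartiles via the statistics module; q[2] - q[0] is the IQR.
    q = statistics.quantiles(drift, n=4)
    scale = 0.74 * (q[2] - q[0])  # robust estimate of the SD of the drifts
    return [((d - med) / scale, abs(d - med) / scale >= critical)
            for d in drift]

# Hypothetical example: six stable anchors and one badly drifting item.
results = robust_z([0.02, -0.05, 0.01, 0.00, 0.03, -0.02, 0.90])
```

In this sketch only the last item (a 0.90-logit shift) would be flagged for removal from the anchor set; the stable items fall well inside the critical region, which is the invariance-preserving behavior the abstract credits Robust-z with, in contrast to the over-flagging Logit Difference approach.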
