Abstract

The idea that test scores may not be valid representations of what students know, can do, and should learn next is well established. Person fit provides an important source of validity evidence, yet person fit analyses at the individual student level are not typically conducted, and person fit information is not communicated to educational stakeholders. In this study, we focus on a promising method for detecting and conveying person fit in large-scale educational assessments. The method uses multilevel logistic regression (MLR) to model the slopes of person response functions, a potential source of person misfit for IRT models. We apply the method to a representative sample of students who took the writing section of the SAT (N = 19,341). The findings suggest that the MLR approach is useful for providing supplemental evidence of model–data fit in large-scale educational testing: it can detect general misfit at both the global and the individual level. However, as with other model–data fit indices, the MLR approach provides information about only some types of person misfit.
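
To illustrate the slope-based intuition behind the approach, the sketch below fits a separate logistic regression of item correctness on item difficulty for each examinee. It is a deliberately simplified, per-person stand-in for the multilevel model described above, not the authors' MLR specification; the simulated data, variable names, and flagging threshold are all hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated example: 50 items with known difficulties and 200 examinees
# responding under a Rasch-like model (setup is purely illustrative).
n_items, n_persons = 50, 200
difficulty = rng.normal(0.0, 1.0, n_items)
ability = rng.normal(0.0, 1.0, n_persons)
p_correct = 1.0 / (1.0 + np.exp(-(ability[:, None] - difficulty[None, :])))
responses = (rng.random((n_persons, n_items)) < p_correct).astype(int)

# Make one examinee respond aberrantly (random guessing), so that
# examinee's person response function is flat rather than decreasing.
responses[0] = rng.integers(0, 2, n_items)

# Per-person logistic regression of correctness on item difficulty.
# A well-fitting response pattern yields a clearly negative slope
# (harder items are answered correctly less often); a slope near zero
# or positive suggests potential person misfit.
X = difficulty.reshape(-1, 1)
slopes = np.empty(n_persons)
for p in range(n_persons):
    y = responses[p]
    if y.min() == y.max():      # all-correct or all-wrong: slope undefined
        slopes[p] = np.nan
        continue
    slopes[p] = LogisticRegression(C=1e6).fit(X, y).coef_[0, 0]

flagged = np.where(slopes > -0.5)[0]   # ad hoc threshold for illustration
print("Flagged examinees (flat or positive slopes):", flagged[:10])
print("Slope for the aberrant examinee:", round(slopes[0], 2))
```

In the MLR approach itself, person-specific slopes enter as random effects in a single multilevel logistic regression, which pools information across examinees; the separate per-person fits here only convey why a flat or positive slope signals an aberrant response pattern.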
