Abstract

This article examines how item response theory (IRT) scoring models reflect the intended content allocation in a set of test specifications, or test blueprint. Although either an adaptive or linear assessment can be built to reflect a set of design specifications, the method of scoring is also a critical step. Standard IRT models employ a set of optimal scoring weights, and in the two-parameter logistic (2PL) and three-parameter logistic (3PL) models these weights depend on the item parameters. The article investigates whether the scoring models reflect an intended set of weights, defined as the proportion of items falling into each cell of the test blueprint. The 3PL model is of special interest because its optimal scoring weights also depend on ability. Thus, the concern arises that for examinees of low ability the intended weights are implicitly altered.
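
The dependence of the weights on ability can be illustrated with the standard Birnbaum locally optimal weights, w_i(theta) = a_i * (P_i(theta) - c_i) / (P_i(theta) * (1 - c_i)), which reduce to the discrimination a_i when c_i = 0 (the 2PL). The sketch below is not taken from the article; the item parameters and the two-cell blueprint are hypothetical, and it only shows how the effective share of weight per blueprint cell can drift with ability under the 3PL.

```python
import numpy as np

def p3pl(theta, a, b, c):
    """3PL response probability; the 2PL is the special case c = 0."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def optimal_weights(theta, a, b, c):
    """Birnbaum locally optimal scoring weights at ability theta.

    For the 2PL (c = 0) this is simply the discrimination a and does not
    depend on theta; for the 3PL the weight shrinks as theta decreases and
    P(theta) approaches the guessing floor c.
    """
    p = p3pl(theta, a, b, c)
    return a * (p - c) / (p * (1.0 - c))

# Hypothetical item parameters for two blueprint cells (three items each)
a = np.array([1.2, 0.8, 1.0, 1.1, 0.9, 1.3])
b = np.array([-0.5, 0.0, 0.5, -1.0, 0.2, 1.0])
c = np.array([0.20, 0.25, 0.20, 0.15, 0.20, 0.25])
cell = np.array([0, 0, 0, 1, 1, 1])  # blueprint cell membership

for theta in (-2.0, 0.0, 2.0):
    w = optimal_weights(theta, a, b, c)
    # Effective proportion of total weight received by each blueprint cell
    share = np.array([w[cell == k].sum() for k in (0, 1)]) / w.sum()
    print(f"theta = {theta:+.1f}  cell weight shares = {np.round(share, 3)}")
```

Under the intended blueprint each cell holds half of the items, so the target share is 0.5 per cell; the printed shares show how far the 3PL weights depart from that target at low ability, while under the 2PL (c = 0) the shares would be constant across theta.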
