Abstract

This article examines how item response theory (IRT) scoring models reflect the intended content allocation in a set of test specifications or test blueprint. Although either an adaptive or a linear assessment can be built to reflect a set of design specifications, the method of scoring is also a critical step. Standard IRT models employ a set of optimal scoring weights, and these weights depend on item parameters in the two-parameter logistic (2PL) and three-parameter logistic (3PL) models. This article investigates whether the scoring models reflect an intended set of weights, defined as the proportion of items falling into each cell of the test blueprint. The 3PL model is of special interest because its optimal scoring weights depend on ability. Thus, the concern arises that for examinees of low ability, the intended weights are implicitly altered.
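For context (this sketch is not part of the original abstract), the standard locally optimal scoring weights illustrate why the 3PL weights depend on ability. Assuming the usual parameterization with discrimination $a_i$, difficulty $b_i$, and lower asymptote $c_i$:

% Locally optimal scoring weight for item i under the 3PL model;
% the 2PL case follows by setting c_i = 0.
\[
P_i(\theta) = c_i + \frac{1 - c_i}{1 + e^{-a_i(\theta - b_i)}},
\qquad
w_i(\theta) = \frac{P_i'(\theta)}{P_i(\theta)\,[1 - P_i(\theta)]}
            = \frac{a_i\,[P_i(\theta) - c_i]}{(1 - c_i)\,P_i(\theta)}.
\]

Under the 2PL ($c_i = 0$), the weight reduces to the constant $a_i$, so the relative contribution of each blueprint cell does not vary across examinees. Under the 3PL, $w_i(\theta)$ shrinks toward zero as $\theta$ decreases (since $P_i(\theta) \to c_i$), which is the mechanism by which the effective blueprint weights can be altered for low-ability examinees.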
