Abstract

A common practical challenge when using item response theory (IRT) models with maximum likelihood estimation (MLE) is how to assign ability estimates to all-incorrect and all-correct response patterns, since the MLE ability estimates for these patterns equal −∞ or +∞. This article uses a simulation study and data from an operational K–12 computerized adaptive test (CAT) to compare how well several alternatives – including Bayesian maximum a posteriori (MAP) estimators, various MLE-based methods, and assigning constants – work as strategies for computing ability estimates for extreme scores in vertically scaled fixed-length Rasch-based CATs. Results suggested that the MLE-based methods, MAP estimators with prior standard deviations of 4 and above, and assigning constants achieved the desired outcomes: they produced finite ability estimates for all-correct and all-incorrect response patterns that were more extreme than the MLE values of students who answered exactly one item correctly or exactly one item incorrectly, and more extreme than the difficulties of the items students saw during the CAT. Additional analyses showed that some methods can differ from the MLE comparison values, or from the b values of the CAT items, by amounts whose magnitude and variability change between all-correct and all-incorrect responses and across grades. Specific discussion is given to how one may select a strategy for assigning ability estimates to extreme scores in vertically scaled fixed-length CATs that employ the Rasch model.
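The divergence the abstract refers to, and the way a MAP prior resolves it, can be illustrated with a minimal sketch. The code below is not from the article; it is a hypothetical example using made-up item difficulties, a crude grid search in place of a proper optimizer, and a normal prior centered at 0 (the prior standard deviation of 4 mirrors one of the settings the abstract mentions). Under the Rasch model an all-correct pattern has a likelihood that increases monotonically in ability, so unpenalized MLE runs off to +∞, while the MAP criterion has a finite interior minimum:

```python
import math

def rasch_p(theta, b):
    """Rasch probability of a correct response to an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def neg_log_posterior(theta, responses, bs, prior_sd=None):
    """Negative log-likelihood; adds a N(0, prior_sd^2) log-prior term for MAP."""
    nll = 0.0
    for u, b in zip(responses, bs):
        p = rasch_p(theta, b)
        nll -= u * math.log(p) + (1 - u) * math.log(1.0 - p)
    if prior_sd is not None:
        nll += theta ** 2 / (2.0 * prior_sd ** 2)  # normal prior centered at 0
    return nll

def estimate(responses, bs, prior_sd=None, lo=-10.0, hi=10.0, steps=20001):
    """Grid-search minimizer -- crude, but adequate for illustration."""
    best_theta, best_val = lo, float("inf")
    for i in range(steps):
        theta = lo + (hi - lo) * i / (steps - 1)
        val = neg_log_posterior(theta, responses, bs, prior_sd)
        if val < best_val:
            best_theta, best_val = theta, val
    return best_theta

bs = [-1.0, 0.0, 1.0, 2.0]   # hypothetical item difficulties (logits)
all_correct = [1, 1, 1, 1]

# Unpenalized MLE for an all-correct pattern climbs to the grid boundary
# (the true MLE is +infinity):
mle = estimate(all_correct, bs)

# MAP with a N(0, 4^2) prior yields a finite, interior estimate:
map_est = estimate(all_correct, bs, prior_sd=4.0)
print(mle, map_est)
```

By symmetry, an all-incorrect pattern drives the unpenalized MLE toward the lower boundary, and the same prior pulls it back to a finite negative value.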
