Abstract
One natural question about polytomous items (which yield responses that can be scored as ordered categories) concerns the information contained in the items: how much more information do polytomous items yield? Using the generalized partial credit IRT model, polytomous items from the 1991 field test of the NAEP Reading Assessment were calibrated together with multiple-choice and short open-ended items, and the expected information of each type of item was computed. On average, four-category polytomous items yielded 2.1 to 3.1 times as much IRT information as dichotomous items. These results provide limited support for the ad hoc rule of weighting a k-category polytomous item the same as k−1 dichotomous items when computing total scores. Comparing average values, polytomous items provided more information across the entire proficiency range. Polytomous items provided the most information about examinees of moderately high proficiency; the information function peaked at proficiency values of 1.0 to 1.5, relative to a population distribution with mean 0. When the extended open-ended items were scored dichotomously, their information decreased sharply; however, they still provided more expected information than the other response formats. For reference, a derivation of the information function for the generalized partial credit model is included.
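For orientation, the quantities compared in the abstract can be written out under a standard parameterization of the generalized partial credit model. The sketch below follows the common Muraki-style form with discrimination a_j, step parameters b_jv, and scaling constant D; this specific notation is an assumption for illustration and is not reproduced from the paper's own derivation.

```latex
% Sketch of the generalized partial credit model (Muraki-style
% parameterization; notation assumed for illustration, not quoted
% from the paper's derivation).
% Probability of a response in category k (k = 0, ..., m_j) of item j:
\[
  P_{jk}(\theta) =
    \frac{\exp \sum_{v=0}^{k} D a_j (\theta - b_{jv})}
         {\sum_{c=0}^{m_j} \exp \sum_{v=0}^{c} D a_j (\theta - b_{jv})}
\]
% The item information function is the variance of the category score,
% scaled by the squared discrimination:
\[
  I_j(\theta) = D^2 a_j^2
    \left[ \sum_{k=0}^{m_j} k^2 P_{jk}(\theta)
           - \Bigl( \sum_{k=0}^{m_j} k \, P_{jk}(\theta) \Bigr)^{2} \right]
\]
```

Under this form, an item's information at a given proficiency θ grows with the spread of its category scores, which is consistent with the abstract's finding that multi-category items carry more expected information than dichotomous items.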