Abstract

In a cognitive diagnostic computerized adaptive testing (CD-CAT) exam, an item pool that consists of items with calibrated item parameters is used for item selection and attribute estimation. The parameter estimates for the items in the item pool are often treated as if they were the true population parameters, and therefore, the calibration errors are ignored. The purpose of this study was to investigate the effects of calibration errors on the attribute classification accuracy, the measurement precision of attribute mastery classification, and the test information under the log-linear cognitive diagnosis model (LCDM) framework. The deterministic input, noisy “and” gate (DINA) model and the compensatory re-parameterized unified model (C-RUM) were used in fixed-length CD-CAT simulations. The results showed that high levels of calibration errors were associated with low classification accuracy, low test information, and misleading estimation of measurement precision. The effects of calibration errors decreased as the test length increased, and the DINA model appeared to be more vulnerable in the presence of calibration errors. The C-RUM was less influenced by calibration errors because of its additive characteristics in the LCDM framework. The same conclusions applied when item exposure control was incorporated and when different item selection methods were used. Finally, the use of a larger calibration sample size to calibrate the item pool was found to reduce the magnitudes of error variances and increase the attribute classification accuracy.
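For readers unfamiliar with the DINA model referenced above, its item response function can be sketched in a few lines. Under DINA, an examinee answers an item correctly with probability 1 − slip if they master every attribute the item's Q-matrix row requires, and with the guessing probability otherwise. The function below is a minimal illustrative sketch (not the authors' simulation code); the parameter names `slip` and `guess` and the example values are assumptions for illustration.

```python
def dina_prob(alpha, q, slip, guess):
    """P(correct response) under the DINA model.

    alpha : list of 0/1 attribute-mastery indicators for the examinee
    q     : the item's Q-matrix row (1 = attribute required)
    slip  : probability a master of all required attributes answers wrong
    guess : probability a non-master answers correctly
    """
    # eta = 1 only when every required attribute is mastered
    eta = all(a == 1 for a, qk in zip(alpha, q) if qk == 1)
    return (1 - slip) if eta else guess

# An examinee mastering both required attributes responds with prob. 1 - slip;
# missing any required attribute drops the probability to the guessing level.
p_master = dina_prob([1, 1], [1, 1], slip=0.1, guess=0.2)      # 0.9
p_nonmaster = dina_prob([0, 1], [1, 1], slip=0.1, guess=0.2)   # 0.2
```

This conjunctive (all-or-nothing) structure is one reason DINA can be more sensitive to calibration error than an additive model such as the C-RUM, where each mastered attribute contributes incrementally to the response probability.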
