Abstract

Testlets, or groups of related items, are commonly included in educational assessments because of their logistical and conceptual advantages. They nonetheless complicate the theory and practice of educational measurement: responses to items within a testlet tend to remain correlated even after controlling for latent ability, which violates the conditional-independence assumption of traditional item response theory models. The present study used Monte Carlo simulation methods to evaluate the effects of testlet dependency on item and person parameter recovery and on classification accuracy. Three calibration approaches were examined: the traditional 2PL model with marginal maximum likelihood estimation, a testlet model with Bayesian estimation, and a bi-factor model with limited-information weighted least squares mean- and variance-adjusted (WLSMV) estimation. Across testlet conditions, parameter types, and outcome criteria, the Bayesian testlet model performed as well as or better than the other approaches.
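To make the dependency mechanism concrete, the sketch below simulates dichotomous responses under a 2PL-style testlet model, in which a person-specific testlet effect shared by all items in the same testlet induces within-testlet correlation beyond latent ability. This is an illustrative assumption for exposition only, not the study's actual simulation design; all dimensions, parameter distributions, and values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulation setup: 1000 examinees, 5 testlets of 6 items each.
n_persons, n_testlets, items_per_testlet = 1000, 5, 6
n_items = n_testlets * items_per_testlet

theta = rng.normal(0.0, 1.0, size=n_persons)             # latent ability
a = rng.lognormal(mean=0.0, sigma=0.3, size=n_items)      # item discriminations
b = rng.normal(0.0, 1.0, size=n_items)                    # item difficulties
testlet_of_item = np.repeat(np.arange(n_testlets), items_per_testlet)

# Person-by-testlet random effects: a larger testlet_sd produces stronger
# within-testlet dependence, i.e., a larger violation of conditional independence.
testlet_sd = 0.8
gamma = rng.normal(0.0, testlet_sd, size=(n_persons, n_testlets))

# 2PL testlet model: P(y = 1) = logistic(a_j * (theta_i - b_j + gamma_{i, d(j)})),
# where d(j) is the testlet containing item j.
logits = a * (theta[:, None] - b + gamma[:, testlet_of_item])
prob = 1.0 / (1.0 + np.exp(-logits))
responses = (rng.uniform(size=prob.shape) < prob).astype(int)

print(responses.shape)  # (1000, 30) matrix of 0/1 item responses
```

Setting testlet_sd to zero recovers the standard 2PL data-generating model, so varying this value is one simple way to manipulate the degree of testlet dependency in a simulation of this kind.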
