Abstract

The increasing number of tests being developed has prompted growing interest in the association between test items and examinees' skill attributes and knowledge states, spurring the development of cognitive diagnosis models. Previous studies have predominantly adopted the Mantel–Haenszel (MH) method to detect differential item functioning (DIF) under such models. Jin et al. (2018) used the odds ratio (OR) method to examine DIF under the Rasch model, which assumes a continuous latent trait, and found that the OR method outperformed the traditional MH method in terms of both type I error rate control and statistical power. However, no study has applied the OR method to DIF detection under cognitive diagnosis models. This study therefore investigated the effectiveness of several DIF detection methods: the MH method, the MH method with a purification procedure, the MH method with attribute patterns as the matching variable, the OR method, and the OR method with a purification procedure. The results showed that DIF detection effectiveness was affected by sample size and the proportion of DIF items; specifically, larger sample sizes were associated with increased statistical power, whereas higher proportions of DIF items were associated with decreased power. The purification procedure improved DIF detection and reduced the type I error rate for both the OR and MH methods.
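For readers unfamiliar with the mechanics of the MH procedure referenced above, the sketch below shows how the continuity-corrected MH chi-square statistic and the common odds-ratio estimate are computed from the 2 × 2 tables formed at each level of a matching variable (a total score, or an integer code for the attribute pattern when matching on attribute profiles, as in one of the conditions compared in this study). It is a minimal illustration based on the standard MH formulas, not the authors' implementation; the function name and arguments are hypothetical.

```python
import numpy as np
from scipy.stats import chi2

def mh_dif_test(responses, group, item, matching):
    """Mantel-Haenszel DIF test for one dichotomous item.

    responses : (n_persons, n_items) array of 0/1 scored responses
    group     : length-n array, 0 = reference group, 1 = focal group
    item      : column index of the studied item
    matching  : length-n array of matching values (e.g., total score on the
                remaining items, or an attribute-pattern code)
    Returns the continuity-corrected MH chi-square statistic, its p-value,
    and the common odds-ratio estimate alpha_MH.
    """
    y = responses[:, item]
    sum_a, sum_e, sum_v = 0.0, 0.0, 0.0   # sums of A_k, E(A_k), Var(A_k)
    num_or, den_or = 0.0, 0.0             # numerator/denominator of alpha_MH

    for k in np.unique(matching):
        at_k = matching == k
        ref, foc = at_k & (group == 0), at_k & (group == 1)
        A = np.sum(y[ref] == 1)   # reference group, correct
        B = np.sum(y[ref] == 0)   # reference group, incorrect
        C = np.sum(y[foc] == 1)   # focal group, correct
        D = np.sum(y[foc] == 0)   # focal group, incorrect
        N = A + B + C + D
        if N < 2 or (A + C) == 0 or (B + D) == 0:
            continue              # stratum carries no information
        n_ref, n_foc = A + B, C + D
        m1, m0 = A + C, B + D     # column margins (correct / incorrect)
        sum_a += A
        sum_e += n_ref * m1 / N
        sum_v += n_ref * n_foc * m1 * m0 / (N**2 * (N - 1))
        num_or += A * D / N
        den_or += B * C / N

    chi_sq = (abs(sum_a - sum_e) - 0.5) ** 2 / sum_v   # continuity correction
    p_value = chi2.sf(chi_sq, df=1)
    alpha_mh = num_or / den_or if den_or > 0 else np.inf
    return chi_sq, p_value, alpha_mh
```

In a purification procedure of the kind examined here, items flagged as DIF on a first pass would be removed from the matching score and the test rerun on the purified matching variable.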
