Abstract

The current study compared the model fit indices, skill mastery probabilities, and classification accuracy of six Diagnostic Classification Models (DCMs): a general model (G-DINA) and five specific models (LLM, RRUM, ACDM, DINA, and DINO). To do so, response data from the grammar and vocabulary sections of a General English Achievement Test, designed from scratch specifically for cognitive diagnostic purposes, were analyzed. The test-level model fit values provided strong evidence that the G-DINA and LLM models possessed the best model fit. In addition, the fit of the ACDM and RRUM was very close to that of the G-DINA. The fit indices of the DINO and DINA models were close to each other but larger than those of the G-DINA and LLM. Model fit was also investigated at the item level, and the results revealed that model selection should be performed at the item level rather than the test level, and that most of the specific models might perform well for the test. The findings of this study suggest that the relationships among the attributes of grammar and vocabulary are not 'either-or' compensatory or non-compensatory but a combination of both.

Highlights

  • Diagnostic Classification Models (DCMs) are considered paramount modeling alternatives for dealing with response data in the presence of multiple postulated latent skills, which allow multivariate classifications of respondents (Rupp & Templin, 2008)

  • The results of this study revealed that the Additive CDM (ACDM) possessed the closest affinity to the G-DINA model in view of model fit and skill classification profiles; in contrast, the RRUM, DINA, and DINO models showed dissimilar results, in both model fit statistics and skill mastery profiles, from the ACDM and the Generalized DINA Model (G-DINA)

  • The results showed that the ACDM would be the best model in terms of model fit



Introduction

Shafipoor et al. Language Testing in Asia (2021) 11:33

Diagnostic Classification Models (DCMs) are considered paramount modeling alternatives for dealing with response data in the presence of multiple postulated latent skills, which allow multivariate classifications of respondents (Rupp & Templin, 2008). They have been applied to purposes such as diagnosing language ability (Lee & Sawaki, 2009a) and classifying learners into similar skill mastery groups (Hartz SM: A Bayesian framework for the unified model for assessing cognitive abilities: blending theory with practicality, unpublished). Different models, along with their statistical packages, have been developed and applied so far. Since choosing the right model will make a difference in the classification of test-takers, model selection should be performed cautiously (Lee & Sawaki, 2009b).
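The compensatory/non-compensatory distinction at the heart of the comparison can be illustrated with the item response functions of the two simplest models in the study, DINA and DINO. The sketch below is illustrative only: the Q-matrix row, attribute profiles, and slip/guess values are hypothetical, not taken from the study's test.

```python
# Minimal sketch of DINA (non-compensatory) vs. DINO (compensatory)
# item response probabilities. All numbers below are hypothetical.

def dina_prob(alpha, q, slip, guess):
    """P(correct) under DINA: ALL attributes required by the item
    (q[k] == 1) must be mastered (alpha[k] == 1); otherwise the
    respondent can only guess."""
    eta = all(a >= needed for a, needed in zip(alpha, q))
    return 1 - slip if eta else guess

def dino_prob(alpha, q, slip, guess):
    """P(correct) under DINO: mastering ANY one required attribute
    is enough to compensate for the others."""
    omega = any(a and needed for a, needed in zip(alpha, q))
    return 1 - slip if omega else guess

# Hypothetical item requiring attributes 1 and 2 (Q-matrix row)
q = [1, 1, 0]
slip, guess = 0.1, 0.2

partial_master = [1, 0, 0]   # mastered only one required attribute
full_master = [1, 1, 0]      # mastered both required attributes

print(dina_prob(partial_master, q, slip, guess))  # 0.2 (guessing)
print(dino_prob(partial_master, q, slip, guess))  # 0.9 (one suffices)
print(dina_prob(full_master, q, slip, guess))     # 0.9
```

A respondent who has mastered only one of the two required attributes succeeds only by guessing under DINA but performs as a master under DINO, which is exactly the contrast the study's finding of "a combination of both" relationships speaks to.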

