Abstract

Large-scale assessments are generally designed for summative purposes, to compare achievement among participating countries. However, these nondiagnostic assessments have also been adapted for diagnostic purposes in the context of cognitive diagnostic assessment. Given the large investments in these assessments, it would be cost-effective to draw finer-grained inferences about attribute mastery. Nonetheless, the correctness of the attribute specifications in the Q-matrix has not been verified, even though it was designed by domain experts. Furthermore, the underlying response process of the TIMSS (Trends in International Mathematics and Science Study) assessment is unknown, as the test was not developed for diagnostic purposes. This study therefore suggests first validating the attribute specifications in the Q-matrix and thereafter identifying a specific reduced or saturated model for each item. The two analyses were validated across 20 randomly selected countries from the TIMSS 2011 data. Results show that attribute specifications can differ from expert opinion and that the underlying model can vary from item to item.

Highlights

  • A recent and popular psychometric model, the cognitive diagnosis model (CDM), in contrast to classical test theory (CTT) and item response theory (IRT), aims mainly to investigate a specific, finer-grained set of multiple skills within a domain of interest.

  • In the DINO model, mastery of even one of the required attributes is enough to answer the item correctly. Another type of reduced model is additive in nature: the additive CDM (A-CDM), the linear logistic model (LLM), and the reduced reparameterized unified model (R-RUM), for instance, use different link functions, but in each the probability of success contributed by one attribute is independent of the other attributes (see the item response functions sketched after these highlights).

  • After validating the current attribute specifications given in the Q-matrix, the second study evaluated model-data fit at the item level.
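
As a quick reference for the models named in these highlights, the item response functions standard in the CDM literature can be sketched as follows; the notation follows common conventions (e.g., the G-DINA framework) rather than being quoted from this paper. The DINA model requires mastery of all attributes an item specifies, while the DINO model requires mastery of at least one:

    \eta_{ij} = \prod_{k} \alpha_{ik}^{q_{jk}}, \qquad
    P(X_{ij} = 1 \mid \alpha_i) = (1 - s_j)^{\eta_{ij}} \, g_j^{\,1 - \eta_{ij}}   (DINA)

    \omega_{ij} = 1 - \prod_{k} (1 - \alpha_{ik})^{q_{jk}}, \qquad
    P(X_{ij} = 1 \mid \alpha_i) = (1 - s_j)^{\omega_{ij}} \, g_j^{\,1 - \omega_{ij}}   (DINO)

The additive models differ only in the link function f (identity for the A-CDM, logit for the LLM, log for the R-RUM):

    f\bigl(P(X_{ij} = 1 \mid \alpha_i)\bigr) = \delta_{j0} + \sum_{k} \delta_{jk} \, \alpha_{ik} \, q_{jk}

Here \alpha_{ik} indicates examinee i's mastery of attribute k, q_{jk} is the Q-matrix entry for item j, and s_j and g_j are the slip and guessing parameters of item j.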


Summary

Introduction

A recent and popular psychometric model, the cognitive diagnosis model (CDM), in contrast to classical test theory (CTT) and item response theory (IRT), aims mainly to investigate a specific, finer-grained set of multiple skills within a domain of interest. For example, the TIMSS data have been analyzed using one of the commonly used reduced models, the DINA model, as highlighted by Lee et al. (2011), Lee et al. (2013), Choi et al. (2015), and Sen and Arıcan (2015). In carrying out these types of analyses, CDMs typically assume that the test was developed based on specific attributes and a Q-matrix (Tatsuoka, 1983), which relates test items to particular attributes. Chen et al. (2013) demonstrated this with 26 released items in the reading domain of the Program for International Student Assessment (PISA) administered in 2000: initial attributes were defined by domain experts, followed by statistical analyses based on absolute and relative fit indices. After redefining those initial attributes and the Q-matrix specifications, the selected Q-matrix was evaluated across reduced CDMs, and the results were investigated using data from different countries. The third purpose of the present study is to validate the results across 20 randomly selected countries.
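
To make the role of the Q-matrix concrete, below is a minimal Python sketch of how a binary Q-matrix relates items to attributes and how the DINA model converts an examinee's attribute profile into predicted response probabilities. The Q-matrix, slip, and guess values here are hypothetical illustrations, not the TIMSS specifications analyzed in the study.

    import numpy as np

    # Hypothetical Q-matrix: 4 items x 3 attributes (1 = item requires the attribute).
    # A real analysis would use the expert-specified TIMSS Q-matrix instead.
    Q = np.array([
        [1, 0, 0],
        [1, 1, 0],
        [0, 1, 1],
        [1, 0, 1],
    ])

    # Hypothetical slip and guess parameters, one per item.
    slip = np.array([0.10, 0.15, 0.20, 0.10])
    guess = np.array([0.20, 0.10, 0.15, 0.25])

    def dina_prob(alpha, Q, slip, guess):
        """P(correct) for each item under the DINA model, given one attribute profile.

        eta_j = 1 only if the examinee masters every attribute item j requires;
        those examinees succeed with probability 1 - slip, all others with guess.
        """
        eta = np.all(alpha >= Q, axis=1).astype(int)
        return (1 - slip) ** eta * guess ** (1 - eta)

    # Example: an examinee who has mastered attributes 1 and 2 but not 3.
    alpha = np.array([1, 1, 0])
    print(dina_prob(alpha, Q, slip, guess))  # -> [0.9  0.85 0.15 0.25]

In a full analysis, such model-implied probabilities are compared with the observed response data through absolute and relative fit indices, which is what drives the validation and revision of the Q-matrix entries described above.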

Background
Method
Statistical Procedures
Results
Summary and Discussion
