Abstract

A variety of cognitive diagnostic models (CDMs) have been developed in recent years to support the diagnostic assessment and evaluation of students. Each model makes different assumptions about the relationship between students’ achievement and skills, so it is important to investigate empirically which CDMs better fit actual data. In this study, we examined this question by comparatively fitting representative CDMs to the Trends in International Mathematics and Science Study (TIMSS) 2007 assessment data across seven countries. Two major findings emerged. First, in accordance with former studies, CDMs fit the data better than item response theory models did. Second, main effects models generally fit better than the other parsimonious models and the saturated models. Related to the second finding, the fit of traditional parsimonious models such as the DINA and DINO models was not optimal. The empirical educational implications of these findings are discussed.
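For readers less familiar with these model families, their item response functions can be sketched in standard CDM notation (this is textbook notation, not a quotation from the paper). Here \alpha_{ik} \in \{0, 1\} denotes examinee i's mastery of attribute k, and q_{jk} is the Q-matrix entry indicating whether item j requires attribute k:

  DINA (parsimonious, conjunctive): P(X_{ij} = 1 \mid \alpha_i) = (1 - s_j)^{\eta_{ij}} \, g_j^{1 - \eta_{ij}}, with \eta_{ij} = \prod_k \alpha_{ik}^{q_{jk}}, where s_j and g_j are the slip and guessing parameters; the DINO model replaces \eta_{ij} with the disjunctive condition \omega_{ij} = 1 - \prod_k (1 - \alpha_{ik})^{q_{jk}}.

  Main effects (additive): P(X_{ij} = 1 \mid \alpha_i) = \delta_{j0} + \sum_k \delta_{jk} \, q_{jk} \, \alpha_{ik}, where \delta_{j0} is an intercept and \delta_{jk} is the main effect of attribute k on item j.

  Saturated (e.g., G-DINA): extends the main-effects form with all two-way and higher-order interaction terms \delta_{jkk'} \alpha_{ik} \alpha_{ik'}, \ldots, up to the full product \prod_k \alpha_{ik}.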

Highlights

  • Assessing students’ current level is the first step towards improving their academic skills

  • Qatar and Yemen exhibited improper solutions under the three-parameter logistic (3PL) item response theory (IRT) model, but we have nevertheless presented the values in Table 5 for reference

  • This indicates that the number of mastered attributes can be considered a good indicator of general mathematical ability. The results of the above two analyses can be interpreted as evidence of the validity of the attributes considered in this study. The findings of the current study for the Trends in International Mathematics and Science Study (TIMSS) 2007 mathematics assessment are summarized in light of our two objectives


Summary

Introduction

Assessing students’ current level is the first step towards improving their academic skills. Effective educational assessment is important because it helps inform students of the extent of their current knowledge and can facilitate timely follow-up and support from teachers or parents [1]. One of the most familiar types of formal assessment is the achievement test, which measures what a student already knows or can do. One of the most prominent and important families of models related to educational assessment, often used for high-stakes tests, is item response theory (IRT) [2]. Among these models, the one- to three-parameter logistic (1PL–3PL) models [3] are particularly popular. However, IRT models may not be appropriate for modeling the numerous attributes that educational diagnosis often requires.
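
For concreteness, the 3PL model mentioned above has the standard form (textbook notation, not an excerpt from this paper):

  P(X_{ij} = 1 \mid \theta_i) = c_j + (1 - c_j) \, \frac{1}{1 + \exp[-a_j(\theta_i - b_j)]}

where \theta_i is examinee i's latent ability, a_j the item discrimination, b_j the item difficulty, and c_j the pseudo-guessing parameter. Fixing c_j = 0 yields the 2PL model, and additionally constraining a_j to a common value yields the 1PL (Rasch) model. Because performance is summarized by a single continuous trait \theta_i, such models do not directly profile mastery of many discrete attributes, which is the gap that CDMs are intended to fill.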


