Abstract

Cognitive diagnosis models (CDMs) are confirmatory latent class models that provide fine-grained information about skills and cognitive processes. These models have gained attention in recent years because of their usefulness in educational and psychological settings. Recently, numerous developments have been made to allow for the implementation of cognitive diagnosis computerized adaptive testing (CD-CAT). Despite these methodological advances, CD-CAT applications are still scarce. To facilitate research and the emergence of empirical applications in this area, we have developed the cdcatR package for R software. The purpose of this document is to illustrate the different functions included in this package. The package includes functionalities for data generation, model selection based on relative fit information, implementation of several item selection rules (including item exposure control), and CD-CAT performance evaluation in terms of classification accuracy, item exposure, and test length. In conclusion, an R package is made available to researchers and practitioners that allows for easy implementation of CD-CAT in both simulation and applied studies. Ultimately, this is expected to facilitate the development of empirical applications in this area.

Highlights

  • The need to evaluate multiple discrete dimensions led to the proposal of a family of item response models known as cognitive diagnosis models (CDMs)

  • The cognitive diagnosis computerized adaptive testing (CD-CAT) methodology has emerged to combine the efficiency of adaptive applications with the fine-grained diagnostic output of CDMs [4]

  • The cdcatR package was developed for CD-CAT in a manner analogous to the catR [45] and mirtCAT [46] packages for traditional item response theory


Summary

Theoretical Background

We can distinguish between two types of evaluation. The first is summative evaluation, which is concerned with ordering respondents along a continuum; the same interest appears in other areas of psychology, such as personnel selection. This type of assessment has usually been addressed from classical test theory or traditional item response theory. The second is formative, diagnostic evaluation, which seeks fine-grained information about mastery of specific skills and is the focus of CDMs. The first challenge for CD-CAT was the adaptation of existing procedures developed for the traditional item response theory framework, with continuous latent variables, to the CDM framework, with discrete latent variables. This resulted in the proposal of multiple item selection rules [4,5,6]. A general discussion will be presented, and lines of future work will be discussed.
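The shift from continuous to discrete latent variables can be illustrated with a small sketch. The following is written in Python purely for illustration (cdcatR itself is an R package, and none of its functions are used here); it implements the DINA response model and selects items by a posterior-weighted variance criterion in the spirit of the G-DINA discrimination index (GDI), one of the rules discussed in the CD-CAT literature. All item parameters, the Q-matrix, and the true attribute profile below are hypothetical.

```python
import itertools
import numpy as np

def dina_probs(Q, guess, slip):
    """Success probabilities P(X_j = 1 | alpha_c) for each item j and each of
    the 2^K attribute profiles under the DINA model: (1 - slip_j) when the
    profile masters every attribute item j requires, guess_j otherwise."""
    K = Q.shape[1]
    profiles = np.array(list(itertools.product([0, 1], repeat=K)))   # 2^K x K
    eta = (profiles @ Q.T == Q.sum(axis=1)).astype(float)            # mastery indicator
    return profiles, guess + (1.0 - slip - guess) * eta              # 2^K x J

def gdi(posterior, P):
    """GDI-style criterion: posterior-weighted variance of each item's
    success probability across attribute profiles."""
    pbar = posterior @ P                 # expected success probability per item
    return posterior @ (P - pbar) ** 2

def update_posterior(posterior, P, item, x):
    """Bayes update of the posterior over the discrete attribute profiles."""
    lik = P[:, item] if x == 1 else 1.0 - P[:, item]
    post = posterior * lik
    return post / post.sum()

rng = np.random.default_rng(1)
Q = np.array([[1, 0], [0, 1], [1, 1], [1, 0], [0, 1]])   # hypothetical Q-matrix
guess = np.full(5, 0.1)
slip = np.full(5, 0.1)
profiles, P = dina_probs(Q, guess, slip)

true_idx = 2                                   # simulated examinee: profile (1, 0)
posterior = np.full(len(profiles), 1.0 / len(profiles))
administered = []
for _ in range(4):                             # fixed-length stopping rule
    scores = gdi(posterior, P)
    scores[administered] = -np.inf             # no item reuse
    j = int(np.argmax(scores))
    administered.append(j)
    x = int(rng.random() < P[true_idx, j])     # simulated response
    posterior = update_posterior(posterior, P, j, x)

estimate = profiles[np.argmax(posterior)]      # MAP attribute classification
```

Unlike a traditional CAT, where information is maximized over a continuous trait, the criterion here operates on a finite posterior over 2^K attribute profiles, which is what the item selection rules implemented in cdcatR exploit.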

The cdcatR Package
Starting Point
Item Selection Rule
Scoring Method
Termination Criterion
Data Generation Using the Package
Illustration
Pattern
Findings
Discussion
