Abstract

Computer-Assisted Testing (CAT) has supported the development of numerous adaptive testing approaches. Approaches such as Item Response Theory (IRT) estimate a student's competency level by modeling the responses to a test as a function of the individual's ability and of the parameters of each question (i.e., item). Multidimensional Item Response Theory (MIRT) extends IRT so that each item depends on multiple competency areas (i.e., knowledge dimensions). MIRT models consider two opposing types of relationship between knowledge dimensions: compensatory and noncompensatory. In a compensatory model, higher competency in one knowledge dimension compensates for lower competency in another. Conversely, in a noncompensatory model, the knowledge dimensions are independent and do not compensate for one another. However, using only one type of relationship at a time restricts the use of MIRT in practice. In this work, we generalize MIRT to a mixed-compensation multidimensional item response theory (MCMIRT) model that incorporates both types of relationships. We also relax the MIRT assumption that each item must involve every knowledge dimension. Thus, the MCMIRT model can better represent real-world curricula. Using synthetic data, we show that our approach outperforms random item selection.
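For concreteness, two standard MIRT formulations illustrate the distinction the abstract draws between compensatory and noncompensatory models. The notation below (discrimination vector \(\mathbf{a}_j\), intercept \(d_j\), difficulties \(b_{jk}\), ability vector \(\boldsymbol{\theta}_i\)) is generic textbook notation, not necessarily the parameterization used by the MCMIRT model itself. A common compensatory model is the multidimensional two-parameter logistic,

\[
P(X_{ij} = 1 \mid \boldsymbol{\theta}_i) = \frac{1}{1 + \exp\!\left(-\left(\mathbf{a}_j^{\top} \boldsymbol{\theta}_i + d_j\right)\right)},
\]

while a common noncompensatory (partially compensatory) model multiplies one logistic term per knowledge dimension,

\[
P(X_{ij} = 1 \mid \boldsymbol{\theta}_i) = \prod_{k=1}^{K} \frac{1}{1 + \exp\!\left(-a_{jk}\left(\theta_{ik} - b_{jk}\right)\right)}.
\]

In the first form, the dimensions combine additively inside the logit, so a surplus on one dimension can offset a deficit on another; in the second form, each dimension contributes a separate factor, so low ability on any single dimension bounds the success probability from above regardless of the other dimensions.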
