Abstract

The future of test construction for psychological ability domains that lend themselves to structured analysis may lie, at least for reasons of test security, in automatic item generation. In this context, one question that has not been explicitly addressed is whether an item response theory (IRT) based psychometric quality control procedure can be embedded directly into the process of automatic item generation. This question was investigated using 2 item generators (for the 2 domains of reasoning and spatial ability) developed on the basis of relevant models of cognitive psychology. Over the course of the 4 studies reported here, the parts of the generators that check for possible violations of psychometric quality ("constraints") were improved. The main findings indicate that quality control procedures can be embedded in automatic item generators depending on (a) the degree to which the domain to be measured can be structured; (b) item-specific, content-based analyses; and (c) the degree to which the constraints can be implemented in software. Furthermore, beyond a global check of model fit via IRT, content-based analysis of items may be valuable for identifying item properties that can lead to violations of psychometric quality.
