Abstract

The development of a computerized adaptive test is widely considered a labor-intensive and time-consuming endeavor. This paper illustrates that this need not be the case by demonstrating the steps taken, the decisions made, and the empirical results obtained during the development of three computerized adaptive tests (CATs) designed to measure student competencies in reading, mathematics, and science. The three tests had to be developed and piloted within an 18-month period, and they were used directly afterward in six research projects of a large nationwide research initiative. To ensure the sound psychometric quality of the CATs, the item calibration (N = 1,632) followed several quality control procedures, including item fit analysis, differential item functioning analysis, and preoperational simulation studies. A CAT pilot study (N = 1,093) and an additional CAT simulation confirmed the general usefulness of the constructed instruments. It is concluded that the development of CATs, including item calibration, simulations, and piloting, within 18 months is quite possible, even for comparatively small development teams. This requires an established theoretical framework for the assessment, a sufficient number of items, specific plans for the item calibration, simulations, and pilot study, and an information technology infrastructure for administering the tests.
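To make the kind of preoperational simulation study mentioned in the abstract concrete, the following Python sketch runs a minimal Monte Carlo CAT for one simulated examinee. It is an illustration under stated assumptions, not the authors' implementation: it assumes a hypothetical 2PL item bank, selects items by maximum Fisher information, updates an EAP ability estimate after each response, and stops at a fixed standard-error threshold. The bank size, parameter distributions, test length, and stopping rule are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical item bank: 2PL discriminations (a) and difficulties (b).
# Bank size and parameter distributions are illustrative assumptions.
N_ITEMS = 300
a = rng.uniform(0.8, 2.0, N_ITEMS)
b = rng.normal(0.0, 1.0, N_ITEMS)
GRID = np.linspace(-4.0, 4.0, 161)  # quadrature grid for the ability posterior

def p_correct(theta, a_i, b_i):
    # 2PL probability of a correct response at ability theta.
    return 1.0 / (1.0 + np.exp(-a_i * (theta - b_i)))

def item_information(theta, a_i, b_i):
    # Fisher information of a 2PL item at ability theta.
    p = p_correct(theta, a_i, b_i)
    return a_i ** 2 * p * (1.0 - p)

def eap_estimate(administered, responses):
    # Expected a posteriori ability estimate under a standard-normal prior.
    posterior = np.exp(-0.5 * GRID ** 2)
    for i, u in zip(administered, responses):
        p = p_correct(GRID, a[i], b[i])
        posterior *= p if u else 1.0 - p
    posterior /= posterior.sum()
    theta_hat = float((GRID * posterior).sum())
    se = float(np.sqrt(((GRID - theta_hat) ** 2 * posterior).sum()))
    return theta_hat, se

def simulate_cat(true_theta, max_items=20, se_stop=0.3):
    # Administer the most informative remaining item until the standard
    # error of the ability estimate falls below se_stop.
    administered, responses = [], []
    theta_hat, se = 0.0, float("inf")
    for _ in range(max_items):
        remaining = np.setdiff1d(np.arange(N_ITEMS), administered)
        info = item_information(theta_hat, a[remaining], b[remaining])
        nxt = int(remaining[np.argmax(info)])
        u = int(rng.random() < p_correct(true_theta, a[nxt], b[nxt]))
        administered.append(nxt)
        responses.append(u)
        theta_hat, se = eap_estimate(administered, responses)
        if se < se_stop:
            break
    return theta_hat, se, len(administered)

# One simulated examinee with true ability 0.5: returns the final ability
# estimate, its standard error, and the number of items administered.
print(simulate_cat(0.5))
```

Repeating such a run over many simulated examinees drawn from an assumed ability distribution is what allows test length, measurement precision, and item exposure to be checked before a CAT goes operational.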
