Abstract

Previous simulation studies of computerized adaptive tests (CATs) have revealed that the validity and precision of proficiency estimates can be maintained when review opportunities are limited to items within successive blocks. Our purpose in this study was to evaluate the effectiveness of CATs with such restricted review options in a live testing setting. Vocabulary CATs were compared under four conditions: (a) no item review allowed, (b) review allowed only within successive 5-item blocks, (c) review allowed only within successive 10-item blocks, and (d) review allowed only after answering all 40 items. Results revealed no trustworthy differences among conditions in vocabulary proficiency estimates, measurement error, or testing time. Within each review condition, ability estimates and number-correct scores increased slightly after review, more answers were changed from wrong to right than from right to wrong, most examinees who changed answers improved their proficiency estimates by doing so, and nearly all examinees indicated that they had an adequate opportunity to review their previous answers. These results suggest that restricting review opportunities on CATs may provide a viable way to satisfy examinee desires, maintain validity and measurement precision, and keep testing time at acceptable levels.
