Abstract

Computer-based testing systems exploit the interaction between computers and individuals to sequentially tailor the presented test items to the test-taker's ability estimate. Administering such sequential adaptive tests offers many benefits, including personalized tests, accurate measurement, item security, and substantial cost reduction. However, designing such intelligent tests is a complex process, and it is important to explore the impact of various parameters and options on performance before switching from traditional tests in a particular environment. Although Monte Carlo simulation is a typical tool for this purpose, it relies on generating pseudo-random samples, which may fail to represent the environment under study effectively and can therefore lead to incorrect inferences. This paper presents a comprehensive case study that evaluates and compares the performance of several sequential adaptive testing procedures using post-hoc simulation, in which items of a real conventional test are re-administered adaptively. The comparisons are based on the number of administered items, the standard error of measurement, item exposure rates, and the correlation between adaptive and non-adaptive estimates. The results vary depending on the settings. However, Bayesian estimation with adaptive item selection can lead to greater savings in the number of test items without jeopardizing the estimated ability. It also yields the lowest average item exposure rate.
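To make the post-hoc simulation loop concrete, the sketch below is a minimal illustration, not the paper's implementation. It assumes a two-parameter logistic (2PL) IRT item bank with hypothetical discrimination and difficulty arrays `a` and `b`, an expected a posteriori (EAP, Bayesian) ability estimator with a standard-normal prior, maximum-information item selection, and a stopping rule based on a target standard error; these specifics are assumptions for illustration and are not taken from the paper.

```python
import numpy as np

def prob_2pl(theta, a, b):
    """Probability of a correct response under the 2PL IRT model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = prob_2pl(theta, a, b)
    return a ** 2 * p * (1.0 - p)

def eap_estimate(responses, a, b, grid=np.linspace(-4, 4, 81)):
    """EAP (Bayesian) ability estimate and posterior SD with a standard-normal prior."""
    prior = np.exp(-0.5 * grid ** 2)
    likelihood = np.ones_like(grid)
    for resp, ai, bi in zip(responses, a, b):
        p = prob_2pl(grid, ai, bi)
        likelihood *= p if resp else (1.0 - p)
    posterior = prior * likelihood
    posterior /= posterior.sum()
    theta_hat = np.sum(grid * posterior)
    se = np.sqrt(np.sum((grid - theta_hat) ** 2 * posterior))
    return theta_hat, se

def post_hoc_cat(recorded_responses, a, b, se_target=0.3, max_items=30):
    """Re-administer a conventional test adaptively using the examinee's recorded answers."""
    administered, answers = [], []
    theta, se = 0.0, np.inf
    while len(administered) < max_items and se > se_target:
        remaining = [i for i in range(len(a)) if i not in administered]
        # Maximum-information selection at the current ability estimate.
        nxt = max(remaining, key=lambda i: item_information(theta, a[i], b[i]))
        administered.append(nxt)
        answers.append(recorded_responses[nxt])  # recorded answer, not a newly simulated one
        theta, se = eap_estimate(answers, a[administered], b[administered])
    return theta, se, administered

# Hypothetical item bank and one examinee's recorded answers from the paper-based test.
a = np.random.uniform(0.8, 2.0, 40)    # discrimination parameters
b = np.random.uniform(-2.0, 2.0, 40)   # difficulty parameters
recorded = np.random.binomial(1, prob_2pl(0.5, a, b))
theta, se, used = post_hoc_cat(recorded, a, b)
print(f"theta={theta:.2f}, SE={se:.2f}, items used={len(used)}")
```

In this sketch the test terminates early once the posterior standard error falls below the target, which is how a post-hoc adaptive re-administration can report savings in the number of items relative to the full conventional test.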
