Abstract

The purpose of the present research was to develop general guidelines to assist practitioners in setting up operational computerized adaptive testing (CAT) systems based on the graded response model. Simulated data were used to investigate the effects of systematic manipulation of various aspects of the CAT procedures for the model. The effects of three major variables were examined: item pool size, the stepsize used along the trait continuum until maximum likelihood estimation could be calculated, and the stopping rule employed. The findings suggest three guidelines for graded response CAT procedures: (1) item pools with as few as 30 items may be adequate for CAT; (2) the variable-stepsize method is more useful than the fixed-stepsize methods; and (3) the minimum-standard-error stopping rule will yield fewer cases of nonconvergence, administer fewer items, and produce higher correlations of CAT θ estimates with full-scale estimates and the known θs than the minimum-information stopping rule. The implications of these findings for psychological assessment are discussed. Index terms: computerized adaptive testing, graded response model, item response theory, polychotomous scoring.
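
To make the procedures named in the abstract concrete, the following Python sketch illustrates Samejima-style graded response model category probabilities, item information, and a minimum-standard-error stopping check. This is not the authors' implementation: the parameter names (a, b), the numerical derivative, and the 0.30 standard-error threshold are illustrative assumptions only.

import numpy as np

def grm_category_probs(theta, a, b):
    """Category probabilities for one graded-response item.
    theta: latent trait value; a: discrimination; b: ordered threshold parameters."""
    # Boundary probabilities P*_k = P(response in category k or above | theta)
    p_star = 1.0 / (1.0 + np.exp(-a * (theta - np.asarray(b))))
    # Pad with P*_0 = 1 and P*_m = 0, then difference adjacent boundaries
    bounds = np.concatenate(([1.0], p_star, [0.0]))
    return bounds[:-1] - bounds[1:]

def item_information(theta, a, b, eps=1e-10):
    """Fisher information of one item at theta: sum_k (dP_k/dtheta)^2 / P_k."""
    h = 1e-5
    p = grm_category_probs(theta, a, b)
    dp = (grm_category_probs(theta + h, a, b) -
          grm_category_probs(theta - h, a, b)) / (2 * h)
    return np.sum(dp ** 2 / np.maximum(p, eps))

def should_stop(theta_hat, administered, se_threshold=0.30):
    """Minimum-standard-error rule: stop once SE(theta_hat) falls below a threshold."""
    total_info = sum(item_information(theta_hat, a, b) for a, b in administered)
    se = 1.0 / np.sqrt(total_info) if total_info > 0 else np.inf
    return se <= se_threshold

# Example with three hypothetical administered items (a, b-thresholds)
items = [(1.5, [-1.0, 0.0, 1.0]), (1.2, [-0.5, 0.5, 1.5]), (0.9, [-1.5, -0.5, 0.5])]
print(should_stop(theta_hat=0.2, administered=items))

Under this rule the test length adapts to the precision achieved for each examinee, which is consistent with the abstract's finding that the minimum-standard-error rule administers fewer items than a minimum-information rule.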
