Abstract

Three extant methods of adapting the length of computer-based mastery tests are described and compared: 1) the sequential probability ratio test (SPRT), 2) Bayesian use of the beta distribution, and 3) adaptive mastery testing based on item response theory (IRT). The utility of the SPRT has been empirically demonstrated by Frick [1]. Tennyson et al. [2] have likewise demonstrated the effectiveness of the beta-distribution approach in the Minnesota Adaptive Instructional System. Considerably more empirical research has been conducted on IRT-based approaches [3]. No empirical studies were found in which these three approaches have been directly compared. As a first step, computer simulations were undertaken to compare the accuracy and efficiency of these approaches in making mastery and nonmastery decisions. Results indicated that the IRT-based approach was more accurate when simulated examinee ability levels were clustered near the cut-off. On the other hand, when ability levels were more widely dispersed, as would likely be the case in pre- and posttest situations in mastery learning, all three approaches were comparably accurate. While the IRT approach tended to be the most efficient, it is the least practical to implement in typical classroom testing situations.
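
To make the first of these approaches concrete, the sketch below implements Wald's SPRT decision rule as it is commonly applied to mastery testing: an examinee is assumed to answer each item correctly with one probability if a master and a lower probability if a nonmaster, and testing stops as soon as the accumulated likelihood ratio crosses either error bound. The specific values (p_master = 0.85, p_nonmaster = 0.60, alpha = beta = 0.05) and the function name are illustrative assumptions, not parameters or code taken from the studies cited above.

import random

def sprt_mastery(responses, p_master=0.85, p_nonmaster=0.60,
                 alpha=0.05, beta=0.05):
    """Classify an examinee as master or nonmaster with Wald's SPRT.

    responses   -- iterable of 0/1 item scores, consumed one at a time
    p_master    -- assumed probability that a master answers an item correctly
    p_nonmaster -- assumed probability that a nonmaster answers correctly
    alpha, beta -- tolerated false-mastery / false-nonmastery error rates
    (All parameter values here are illustrative assumptions.)
    """
    upper = (1 - beta) / alpha    # crossing this bound -> declare mastery
    lower = beta / (1 - alpha)    # crossing this bound -> declare nonmastery
    ratio = 1.0                   # likelihood ratio L(mastery) / L(nonmastery)
    n = 0
    for n, score in enumerate(responses, start=1):
        if score:
            ratio *= p_master / p_nonmaster
        else:
            ratio *= (1 - p_master) / (1 - p_nonmaster)
        if ratio >= upper:
            return "master", n
        if ratio <= lower:
            return "nonmaster", n
    return "undecided", n         # item pool exhausted before a bound was hit

# Example: a simulated examinee whose true success rate is 0.90
random.seed(1)
examinee = (1 if random.random() < 0.90 else 0 for _ in range(50))
print(sprt_mastery(examinee))     # e.g. ('master', 11)

Because the test stops as soon as either bound is crossed, the number of items administered varies from examinee to examinee, which is the source of the efficiency gains the simulations compare across the three approaches.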
