Abstract

The Nedelsky standard setting procedure uses an option-elimination strategy to estimate the probability that a minimally competent candidate (MCC) will answer a multiple-choice item correctly. The purpose of this study was to investigate the accuracy of item performance predicted from Nedelsky ratings. The results indicate that the test-taking behavior of MCCs does not match the underlying Nedelsky assumption that MCCs guess randomly among the options judges believe should remain attractive. Further, the accuracy of predicted item performance appears to vary as a function of item difficulty and content domain. However, an analysis of the relationship between judges' ratings of distractor difficulty and the proportion of examinees selecting item distractors indicated that useful information about examinee item performance is obtainable from Nedelsky-based judgments.
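The option-elimination logic described above can be sketched as follows. This is a minimal illustration, not the study's code: it assumes the standard Nedelsky formula, in which the predicted probability of a correct answer is 1 divided by the number of options left after a judge removes the distractors an MCC could eliminate, and the per-item probabilities sum to a test-level cutscore. All function names and the example data are hypothetical.

```python
# Sketch of the standard Nedelsky calculation (assumed, not from this study):
# a judge marks the distractors a minimally competent candidate (MCC) could
# eliminate; the MCC is then assumed to guess at random among the remaining
# options, so the predicted P(correct) is 1 / (options remaining).

def nedelsky_item_probability(num_options: int, num_eliminated: int) -> float:
    """Predicted P(correct) for an MCC on one multiple-choice item."""
    remaining = num_options - num_eliminated
    if remaining < 1:
        raise ValueError("at least the keyed answer must remain")
    return 1.0 / remaining

def nedelsky_cutscore(items: list[tuple[int, int]]) -> float:
    """Sum item probabilities over (num_options, num_eliminated) pairs."""
    return sum(nedelsky_item_probability(n, e) for n, e in items)

# Hypothetical example: three 4-option items on which judges believe an
# MCC can eliminate 2, 1, and 0 distractors, respectively.
items = [(4, 2), (4, 1), (4, 0)]
print(round(nedelsky_cutscore(items), 4))  # 0.5 + 1/3 + 0.25 -> 1.0833
```

The study's finding is that real MCC responses depart from the random-guessing assumption encoded in `nedelsky_item_probability`, which is why predicted item performance varies with item difficulty and content domain.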
