Abstract

This article provides both logical and empirical evidence to justify the use of an item‐mapping method for establishing passing scores on multiple‐choice licensure and certification examinations. After describing the item‐mapping standard‐setting process, the article discusses the rationale and theoretical basis for the method and outlines the similarities and differences between the item‐mapping and Bookmark methods. Empirical support comes from four standard‐setting studies for diverse licensure and certification examinations, each conducted using both the item‐mapping and the Angoff methods. Rating data from the four studies, under each of the two methods, were analyzed with item‐by‐rater random effects generalizability and dependability studies to determine which method yielded higher inter‐judge consistency. Results indicated that the item‐mapping method produced higher inter‐judge consistency and greater rater agreement than the Angoff method.
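The abstract's item‐by‐rater random effects generalizability analysis can be illustrated with a short sketch. The code below is a minimal, hypothetical example (not the authors' actual analysis): it estimates ANOVA variance components for a fully crossed items × raters design with a single observation per cell, then computes a generalizability coefficient (for relative decisions) and a dependability coefficient (for absolute decisions), treating items as the object of measurement and raters as the random facet. Higher coefficients correspond to greater inter‐judge consistency.

```python
import numpy as np

def g_study(ratings: np.ndarray):
    """Two-way crossed random-effects G-study for an items x raters matrix.

    Returns estimated variance components and the generalizability (rho2,
    relative decisions) and dependability (phi, absolute decisions)
    coefficients for the observed number of raters.
    """
    n_items, n_raters = ratings.shape
    grand = ratings.mean()
    item_means = ratings.mean(axis=1)
    rater_means = ratings.mean(axis=0)

    # Mean squares from the standard ANOVA decomposition.
    ms_items = n_raters * np.sum((item_means - grand) ** 2) / (n_items - 1)
    ms_raters = n_items * np.sum((rater_means - grand) ** 2) / (n_raters - 1)
    resid = ratings - item_means[:, None] - rater_means[None, :] + grand
    ms_resid = np.sum(resid ** 2) / ((n_items - 1) * (n_raters - 1))

    # Expected-mean-squares estimators (negative estimates truncated at 0).
    var_resid = ms_resid                                   # sigma^2(ir,e)
    var_items = max((ms_items - ms_resid) / n_raters, 0.0)  # sigma^2(i)
    var_raters = max((ms_raters - ms_resid) / n_items, 0.0) # sigma^2(r)

    # Relative (rho2) and absolute (phi) coefficients for n_raters raters.
    rho2 = var_items / (var_items + var_resid / n_raters)
    phi = var_items / (var_items + (var_raters + var_resid) / n_raters)
    return {"items": var_items, "raters": var_raters, "resid": var_resid,
            "rho2": rho2, "phi": phi}

# Tiny illustration: 3 items rated by 2 raters.
result = g_study(np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]))
```

For the small matrix above, the residual variance is zero (the rater effect is perfectly additive), so the relative coefficient is 1.0 and the dependability coefficient is slightly lower because it also penalizes the rater main effect.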
