Abstract

Purpose. To test the criterion validity of existing standardized-patient (SP) examination scores, using global ratings by a panel of faculty-physician observers as the gold-standard criterion; to determine whether such ratings can provide a reliable gold standard for validity-related research; and to encourage the use of these gold-standard ratings for validation research and for examination development, including scoring and standard setting, as well as for enhancing understanding of the clinical-competence construct.

Method. Five faculty physicians independently observed and rated videotaped performances of 44 students from one medical school on the seven SP cases that make up the fourth-year assessment administered at The Morchand Center of Mount Sinai School of Medicine to students in the eight member schools of the New York City Consortium.

Results. Validity coefficients (correlations between examination scores and the overall ratings) ranged from .60 to .70. Reliability coefficients for ratings of overall examination performance reached the commonly recommended .80 level and came close at the case level, with interrater reliabilities generally in the .70-to-.80 range.

Conclusions. The validity coefficients are high enough to warrant optimism that further studies identifying the measurable performance characteristics that most reflect the gold-standard ratings could raise them to the recommended .80 level. The high interrater reliabilities indicate that faculty-physician ratings of performance on SP cases and examinations may be able to provide a reliable gold standard for validating and refining SP assessment.
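The two kinds of coefficients reported above are standard psychometric quantities. As a minimal illustrative sketch only (the paper does not specify its exact computational procedure, and all data here are hypothetical), interrater reliability across a rater panel can be estimated with Cronbach's alpha treating raters as items, and a validity coefficient is a Pearson correlation between examination scores and the panel's ratings:

```python
# Illustrative only: hypothetical data, stdlib Python.
# ratings_by_rater[i][j] = rater i's global rating for student j.
from statistics import mean, pvariance

def cronbach_alpha(ratings_by_rater):
    """Interrater reliability with raters treated as items on a k-item scale."""
    k = len(ratings_by_rater)
    item_vars = sum(pvariance(r) for r in ratings_by_rater)
    totals = [sum(student) for student in zip(*ratings_by_rater)]
    return k / (k - 1) * (1 - item_vars / pvariance(totals))

def pearson_r(scores, ratings):
    """Validity coefficient: correlation of exam scores with panel ratings."""
    mx, my = mean(scores), mean(ratings)
    cov = sum((a - mx) * (b - my) for a, b in zip(scores, ratings))
    sx = sum((a - mx) ** 2 for a in scores) ** 0.5
    sy = sum((b - my) ** 2 for b in ratings) ** 0.5
    return cov / (sx * sy)
```

With perfectly agreeing raters, `cronbach_alpha` returns 1.0; the .70-to-.80 values reported above correspond to substantial but imperfect agreement among the five observers.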
