Abstract

The purpose of this study was to assess how new National Board of Medical Examiners (NBME) performance examinations--computer-based case simulations (CBX) and standardized-patient examinations (SPX)--compare with each other and with traditional internal and external measures of medical students' performance. A secondary objective was to examine students' attitudes toward the new and traditional evaluation modalities. Fourth-year students (n = 155) at the University of California, Los Angeles, School of Medicine (including joint programs at Charles R. Drew University of Medicine and Science and the University of California, Riverside) were assigned two days of performance examinations (eight SPX cases and ten CBX cases) and a self-administered attitudinal survey. The CBX was scored by the NBME and the SPX by an NBME/Macy consortium. Scores were linked to the survey responses and correlated with archival student data, including traditional performance indicators (licensing board scores, grade-point averages, etc.). Of the 155 students, 95% completed the testing. The CBX and the SPX showed low to moderate, statistically significant correlations with each other and with traditional measures of performance. Traditional measures were intercorrelated at higher levels than they were with the CBX or SPX. Students' perceptions of the various evaluation methods varied with the competency being assessed, a pattern consistent with the theoretical constructs underlying the development of performance examinations. For example, students rated the CBX best for assessing clinical decision making and multiple-choice examinations best for assessing knowledge. Together, the examination results and student perceptions provide converging evidence that performance examinations measure different domains of physician competency and support the use of multipronged assessment approaches.
