Abstract

Objective: Surgical clerkships frequently include oral exams to assess students' ability to critically analyze data and exercise clinical judgment in common scenarios. Limited guidance exists for interpreting the validity of oral exam scores, making improvements difficult to target. We examined the development, administration, and scoring of a clerkship oral exam within a validity evidence framework.

Design: This was a retrospective study of a third-year, end-of-clerkship oral exam in obstetrics and gynecology (OBGYN). Validity evidence regarding content, response process, internal structure, and relationships to other variables was collected and evaluated for 5 versions of the oral exam.

Setting: Albert Einstein College of Medicine, Bronx, New York City.

Participants: Participants were 186 third-year medical students who completed the OBGYN clerkship in the 2020 to 2021 academic year.

Results: The average number of objectives assessed per oral exam version was uniform, but the distribution of questions across Bloom's levels of cognition was uneven. Student scores on all questions, regardless of Bloom's level of cognition, were >87%, and the reliability (Cronbach's alpha) of item scores ranged from 0.58 to 0.74. There was a moderate, positive correlation (Spearman's rho) between oral exam scores and national shelf exam scores (0.35). Correlations between oral exam scores and (a) clinical performance ratings (0.14) and (b) formal presentation scores (-0.19) were low.

Conclusions: This study provides an example of how to examine the validity of oral exam scores for targeted improvements. Further modifications are needed before using the scores for high-stakes decisions. The authors provide recommendations for additional sources of validity evidence to collect in order to better meet the goals of any surgical clerkship oral exam.
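The abstract relies on two standard statistics: Cronbach's alpha for the reliability of item scores and Spearman's rho for rank correlations between score sets. As a minimal sketch using hypothetical scores (the study's data are not public), both can be computed with only the Python standard library:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha: items is a list of per-item score lists,
    one inner list per exam item, aligned by student."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-student totals
    item_var = sum(pvariance(scores) for scores in items)
    return (k / (k - 1)) * (1 - item_var / pvariance(totals))

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the ranks
    (tied values receive their average rank)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(v):
            j = i
            while j + 1 < len(v) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average rank for a run of ties
            for t in range(i, j + 1):
                r[order[t]] = avg
            i = j + 1
        return r
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: 3 exam items scored for 5 students,
# and two score sets (e.g., oral exam vs. shelf exam).
items = [[1, 2, 3, 4, 5], [2, 2, 3, 5, 5], [1, 3, 3, 4, 4]]
oral = [78, 85, 90, 88, 95, 70]
shelf = [65, 72, 80, 78, 90, 60]
print(round(cronbach_alpha(items), 2))   # 0.95
print(round(spearman_rho(oral, shelf), 2))  # 1.0 (perfectly monotone example)
```

In practice `scipy.stats.spearmanr` is the usual choice for the correlation; the from-scratch versions here simply make the definitions explicit.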
