Abstract

We compared core pediatric clerkship student assessments across 11 geographically distinct learning environments following a major curriculum change. We sought to determine whether intersite consistency existed, which can serve as a marker of program evaluation success.

We evaluated students' overall pediatric clerkship performance along with individual assessments targeting our clerkship learning objectives. Using data from the graduating classes of 2015 to 2019 (N = 859), we conducted an analysis of covariance and a multivariate logistic regression analysis to investigate whether performance varied across training sites.

Of these students, 833 (97%) were included in the study. The majority of the training sites did not differ significantly from one another. After controlling for the Medical College Admission Test (MCAT) total score and the average pre-clerkship National Board of Medical Examiners (NBME) final exam score, clerkship site explained only an additional 3% of the variance in the clerkship final grade.

In the 5 years following a curriculum overhaul to an 18-month, integrated-module pre-clerkship curriculum, we found that student pediatric clerkship performance in clinical knowledge and skills did not differ significantly across 11 geographically varied teaching sites when controlling for students' pre-clerkship achievement. Specialty-specific curriculum resources, faculty development tools, and assessment of learning objectives may provide a framework for maintaining intersite consistency in the face of an expanding network of teaching facilities and faculty.
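To make the analytic approach concrete, the sketch below shows how the covariance analysis described above (final grade regressed on site after adjusting for MCAT total and pre-clerkship NBME average) could be run in Python with statsmodels. The variable names (final_grade, mcat_total, nbme_avg, site) and the synthetic data are illustrative assumptions, not the study's actual dataset or code, and the logistic regression component is not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in data; a real analysis would use the cohort records.
rng = np.random.default_rng(0)
n = 833
df = pd.DataFrame({
    "site": rng.integers(1, 12, n).astype(str),   # 11 training sites
    "mcat_total": rng.normal(510, 6, n),          # MCAT total score
    "nbme_avg": rng.normal(80, 8, n),             # pre-clerkship NBME average
})
df["final_grade"] = (
    0.3 * df["mcat_total"] + 0.5 * df["nbme_avg"] + rng.normal(0, 5, n)
)

# Covariates-only model vs. a model adding site as a categorical factor.
base = smf.ols("final_grade ~ mcat_total + nbme_avg", data=df).fit()
full = smf.ols("final_grade ~ mcat_total + nbme_avg + C(site)", data=df).fit()

# Incremental variance attributable to site (the abstract reports ~3%).
print(f"Delta R^2 for site: {full.rsquared - base.rsquared:.3f}")

# Type II ANOVA table: tests the site effect after adjusting for covariates.
print(sm.stats.anova_lm(full, typ=2))
```

Comparing the R-squared of the two nested models isolates how much of the grade variance the clerkship site explains beyond students' pre-clerkship achievement, which is the quantity the abstract summarizes.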
