Abstract

Importance: Entrustable professional activities (EPAs) are an emerging workplace-based, patient-oriented assessment approach with limited empirical evidence.

Objective: To measure the development of pediatric trainees' clinical skills over time using EPA-based assessment data.

Design, Setting, and Participants: Prospective cohort study of categorical pediatric residents over 3 academic years (2015-2016, 2016-2017, and 2017-2018) assessed on 17 American Board of Pediatrics EPAs. Residents in training at 23 pediatric residency programs in the Association of Pediatric Program Directors Longitudinal Educational Assessment Research Network were included. Assessments were conducted by clinical competency committee members, who made summative decisions regarding the level of supervision required for each resident on each EPA. Data were collected from May 2016 to November 2018 and analyzed from November to December 2018.

Exposures: Longitudinal, prospective assessment using EPAs.

Main Outcomes and Measures: Trajectories of supervision levels by EPA during residency training and how often graduating residents were deemed ready for unsupervised practice in each EPA.

Results: Across the 5 data collection cycles, 1987 residents from all 3 postgraduate years in 23 residency programs were assigned 25 503 supervision level reports for the 17 general pediatrics EPAs. The 4 EPAs that required the most supervision across training were EPA 14 (quality improvement) on the 5-level scale (estimated mean level at graduation, 3.7; 95% CI, 3.6-3.7) and EPAs 8 (transition to adult care; mean, 7.0; 95% CI, 7.0-7.1), 9 (behavioral and mental health; mean, 6.6; 95% CI, 6.5-6.6), and 10 (resuscitate and stabilize; mean, 6.9; 95% CI, 6.8-7.0) on the expanded 5-level scale. At the time of graduation (36 months), the percentage of trainees rated at a supervision level corresponding to unsupervised practice varied by EPA from 53% to 98%. If performance standards were set to align with 90% of trainees achieving the level of unsupervised practice, this standard would be met for only 8 of the 17 EPAs (although 89% met this standard for EPA 17, performing the common procedures of the general pediatrician).

Conclusions and Relevance: This study presents initial evidence for empirically derived practice readiness and sets the stage for identifying the curricular gaps that contribute to the discrepancy between observed practice readiness and the standards needed to produce physicians able to meet the health needs of the patient populations they serve. Future work should compare these findings with postgraduation outcomes data as a means of seeking validity evidence.

Highlights

  • The 4 entrustable professional activities (EPAs) that required the most supervision across training were EPA 14 (quality improvement) on the 5-level scale and EPAs 8 (transition to adult care), 9 (behavioral and mental health), and 10 (resuscitate and stabilize) on the expanded 5-level scale

  • If performance standards were set to align with 90% of trainees achieving the level of unsupervised practice, this standard would be met for only 8 of the 17 EPAs (although 89% met this standard for EPA 17, performing the common procedures of the general pediatrician)

Introduction

Medical education throughout much of the world has been transitioning to a competency-based medical education (CBME) system.[1,2,3,4,5] The core tenet of CBME is that individuals advance through training at variable rates as they demonstrate sufficient skill in a defined set of competencies, rather than at predetermined points during the training process.[6,7,8] In this framework, competencies provide a common understanding of the desired outcomes of training for learners and teachers, while informing curricula and assessment across the continuum of medical education.[4,9,10,11] Over the past 2 decades, competency models have been developed and implemented to variable extents throughout the world.[1,2,3,12] In the United States, the primary graduate medical education (GME) model is the milestones developed by the Accreditation Council for Graduate Medical Education and the American Board of Medical Specialties.[1] Since 2014, the Accreditation Council for Graduate Medical Education has required GME training programs to periodically measure and report ratings of specialty-specific competency milestones for individual trainees, but it does not yet require that the milestones be used to make advancement decisions.[1,13,14]
