Abstract
INTRODUCTION

Evaluating resident operative skill acquisition is a common challenge across surgical specialties. The Operative Entrustability Assessment (OEA) is a validated assessment tool that facilitates compliance with the Accreditation Council for Graduate Medical Education's Next Accreditation System and documents resident operative performance at the point of care.1 The OEA and other operative rating tools have been implemented in surgical training programs across the United States in an effort to incorporate objective, valid, and reliable operative skills assessments into surgical training.2 A recent multi-institutional qualitative study on resident feedback needs raised questions about the reliability of operative skills feedback when it is given more than a week after the case date.3 Using our experience with the OEA, we assessed the reliability of evaluation scores according to the timeliness of feedback completion.

METHODS

We extracted evaluator and self-assessment scores from all cases logged since OEA implementation at our institution. We defined OEA score reliability as the correlation between self-assessment and evaluator scores; previous studies have shown this correlation to be positive, moderate to strong, and statistically significant.4,5 We used a paired t test to compare scores and Pearson's correlation coefficient to assess reliability, stratified by time to evaluation completion divided into quintiles (Q1: 0, Q2: 1–3, Q3: 4–13, Q4: 14–38, and Q5: >38 days after surgery). We used likelihood ratio tests on linear regression to assess the interaction between reliability and timeliness of completion.

RESULTS

Between September 2013 and October 2016, 1778 complete OEAs were logged. Mean resident self-assessment score (3.41 ± 1.09) was slightly higher than mean evaluator score (3.37 ± 0.99; P = 0.048).
Overall, self-assessment scores were strongly and significantly correlated with evaluator scores [Pearson's correlation coefficient (r) = 0.72; P < 0.001]. Stratified by delay to completion, correlation coefficients were similar for evaluations completed within 0 days (r = 0.77; P < 0.001), 1–3 days (r = 0.73; P < 0.001), and 4–13 days after surgery (r = 0.70; P < 0.001). Although still statistically significant, the correlation was only moderate for evaluations entered 14–38 days (r = 0.60; P < 0.001) or more than 38 days (r = 0.52; P < 0.001) after surgery. We found strong evidence of an interaction between time to completion and OEA evaluator score reliability (P < 0.001).

CONCLUSIONS

Our data support the reliability of OEA evaluator scores completed within 2 weeks of the case, with significantly decreased reliability associated with delayed completion. This represents a useful refinement in the interpretation of evaluation scores that is crucial as surgical specialties move toward competency-based accreditation.

ACKNOWLEDGMENTS

Michael Cohen assisted with extraction of Operative Entrustability Assessment data; he was not compensated for this contribution. We thank the residents and faculty of the Johns Hopkins School of Medicine Department of Plastic and Reconstructive Surgery for using the Operative Entrustability Assessment and continuously providing feedback on it.
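The stratified reliability analysis described in the METHODS can be illustrated with a minimal Python sketch. This is not the authors' analysis code: the data below are synthetic (the noise model linking delay to score disagreement is an assumption made purely for illustration), and only the delay bins and overall sample size are taken from the abstract.

```python
import numpy as np

def pearson_r(x, y):
    # Pearson correlation coefficient between paired score vectors
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

rng = np.random.default_rng(0)
n = 1778                                  # number of complete OEAs in the study
delay = rng.integers(0, 60, n)            # synthetic days-to-completion
evaluator = rng.normal(3.37, 0.99, n)     # evaluator scores (study mean/SD)
# Assumption for illustration only: disagreement noise grows with delay
self_score = evaluator + rng.normal(0, 1, n) * (0.3 + 0.01 * delay)

# Stratify by the study's delay bins and compute r within each stratum
bins = [(0, 0), (1, 3), (4, 13), (14, 38), (39, 10**9)]
for lo, hi in bins:
    mask = (delay >= lo) & (delay <= hi)
    r = pearson_r(self_score[mask], evaluator[mask])
    print(f"{lo}-{hi} days: r = {r:.2f} (n = {mask.sum()})")
```

The abstract's interaction test (likelihood ratio test on a linear regression with a delay-by-score interaction term) would be layered on top of this, for example with a statistics package; the sketch above only reproduces the per-stratum correlation step.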