Abstract
Periodic review of resident performance is an important aspect of residency training. Among allopathic residency programs, it is expected that the performance of resident physicians, which can be grouped according to the ACGME core competencies, be assessed so as to allow for effective feedback and continuous improvement. Review of monthly evaluation forms for residents in the core ACGME programs at Marshall University and the University of Toledo demonstrated a wide spread in the number of Likert questions that faculty were asked to complete, ranging from a low of 7 in Surgery to a high of 65 in Psychiatry (both Marshall programs). Correlation and network analysis were performed on these data. High degrees of correlation were noted between answers to questions (controlled for each resident) on these forms at both institutions. In other words, although evaluation scores varied tremendously among the different residents in all the programs studied, scores addressing different competencies tended to be very similar for the same resident, especially in some of the programs studied. Network analysis suggested that there were clusters of questions that produced essentially the same answer for a given resident, and these clusters were larger on some programs' assessment forms; this seemed to be the rule in the residency programs with large numbers of Likert questions. The authors suggest that reducing the number of monthly questions used to address the core competencies in some programs may be possible without substantial loss of information.
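The paper does not publish its analysis code, but the described approach (correlating Likert responses after controlling for the resident, then treating highly correlated questions as a network and looking for clusters) can be illustrated with a minimal sketch. The file name, column names, and the 0.8 correlation threshold below are illustrative assumptions, not values from the study.

```python
# Minimal sketch (not the authors' code) of within-resident correlation
# followed by network clustering of redundant Likert questions.
import pandas as pd
import networkx as nx

# Assumed layout: one row per completed evaluation; 'resident' identifies
# the trainee, and the remaining q* columns hold Likert responses.
df = pd.read_csv("evaluations.csv")
questions = [c for c in df.columns if c.startswith("q")]

# Control for the resident: subtract each resident's own mean per question,
# so correlations reflect agreement across questions for the same trainee
# rather than differences between trainees.
centered = df.groupby("resident")[questions].transform(lambda x: x - x.mean())

# Pairwise Pearson correlations between questions.
corr = centered.corr()

# Network: nodes are questions; edges link pairs whose answers are nearly
# interchangeable for a given resident. Threshold is an assumption.
THRESHOLD = 0.8
G = nx.Graph()
G.add_nodes_from(questions)
for i, qa in enumerate(questions):
    for qb in questions[i + 1:]:
        if corr.loc[qa, qb] >= THRESHOLD:
            G.add_edge(qa, qb, weight=corr.loc[qa, qb])

# Connected components approximate the redundant clusters the paper
# describes; a large cluster suggests its questions could be consolidated
# with little loss of information.
for cluster in nx.connected_components(G):
    if len(cluster) > 1:
        print(sorted(cluster))
```

Under this reading, a form like the 65-question Psychiatry evaluation producing a few large components would support the authors' conclusion that fewer questions could capture nearly the same information.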
Highlights
Evaluation of postgraduate (resident) performance has been occurring since the development of such programs in the early 20th century, following the famous ‘Flexner’ report calling for improvements in the education of physicians [1].
The evaluation of postgraduate physician trainees in ACGME programs has undergone an infusion of rigor and structure during the past decades.
Some of this can be attributed to the introduction of the six core competencies approximately 20 years ago [9].
Summary
Evaluation of postgraduate (resident) performance has been occurring since the development of such programs in the early 20th century, following the famous ‘Flexner’ report calling for improvements in the education of physicians [1]. While the six core competencies (patient care, medical knowledge, interpersonal and communication skills, professionalism, systems-based practice, and practice-based learning and improvement) form the root of most evaluation systems [2,4,5], specialty evaluations have been allowed, if not encouraged, to expand to query along far more axes of evaluation. This is not to say that American Board of Medical Specialties (ABMS) residency review committees (RRCs) have insisted on longer evaluation forms; they have not. Virtually all of these RRCs have increased the granularity of competency assessment, and incorporation into monthly evaluation forms would seem to logically follow. As most residency evaluation forms have incorporated the concepts of milestones and PGY-specific evaluations, these forms, which are filled out by faculty (as well as other learners and, in some cases, other members of the health care team), have become more complex.