Abstract

Residency program staff have an important obligation to organize regular evaluation of residents by faculty members and of both faculty and rotations by residents. Without regular, constructive feedback, residents and faculty members find it difficult to gauge their own progress, which may lessen their motivation and lower their subsequent performance (1). Evaluations also provide the residency program director and the department chair with valuable information for recognizing outstanding faculty members and residents and for identifying potential problems in the program or its participants that may benefit from remediation. Finally, the Accreditation Council for Graduate Medical Education (ACGME) requires that all residency programs include such evaluations, and some programs have been cited, and even placed on probation, for failure to do so.

Several factors tend to reduce faculty and resident participation in evaluations. Evaluation instruments are frequently complex and time-consuming (2), and residents and faculty members may question whether their input will produce constructive change. In addition, some may be reluctant to submit negative evaluations of others' performance for fear of retaliatory negative evaluations of their own.

Some measurements of resident and faculty performance, such as in-training examination scores, number of publications, and contact hours between faculty members and residents, are quantitative and therefore considered objective. A comprehensive system of evaluation, however, also includes "subjective" components: dimensions of behavior considered important to overall performance that are listed and qualitatively measured on a rating form. Crucial aspects of a resident's performance, such as consultative skills, professionalism, and eagerness to learn, are difficult to evaluate by any means other than subjective ones, and such data can come only from the people who directly observe that resident's work.

This article describes the effort made by the staff of a large residency program in diagnostic radiology (Indiana University School of Medicine, Department of Radiology) to improve its system of resident and faculty evaluation. The methods used and the results achieved are, to some degree, specific to this institution. We believe, however, that they also illustrate important lessons that may benefit other institutions, which might adopt or modify these tools as they see fit.
