Abstract

Purpose
To develop a tool for the external and self-evaluation of residents in the Communicator, Collaborator, and Professional CanMEDS roles.

Methods
An academic teaching institution affiliated with 4 major urban hospitals conducted a survey involving 46 residents and 216 hospital staff members. Residents selected at least 13 external evaluators from different categories (including physicians, nurses or technologists, peers or fellows, and support staff members) from their last 6 months of rotations. The external evaluators and residents answered 4 questions pertaining to each of the 3 CanMEDS roles being assessed. The survey results were analysed for feasibility, variance within and between rater groups, the relationship between multisource and self-evaluation scores, and the relationship between multisource feedback and in-training evaluation report scores.

Results
The multisource feedback survey had an overall response rate of 73%, with 683 evaluations sent out to 216 unique evaluators. Ratings from different groups of evaluators were only weakly correlated. Residents were most likely to receive their best rating from a collaborating physician and their worst rating from a site secretary or a program assistant. In general, self-assessment scores were significantly lower than multisource feedback scores. Although correlations were strong within the multisource feedback data and within the in-training evaluation report data, the correlation between the two data sets was weak.

Conclusions
Multisource feedback provides useful feedback and scores relating to critical CanMEDS roles that are not necessarily reflected in a resident's in-training evaluation report. The self-assessment component of multisource feedback allows residents to gauge the accuracy of their self-assessments and thereby improve their lifelong learning skills.

