Abstract

Providing medical trainees with meaningful feedback is an essential condition for learning. Informative, constructive, and behaviorally anchored feedback reinforces good clinical practice, shapes emerging skills, and identifies knowledge gaps. However, providing meaningful feedback can be challenging, particularly when it requires reporting “unsatisfactory” performance by medical students and residents. Many clinical supervisors, particularly in smaller programs, find it difficult to give below-average scores to poorly performing students and residents [1]. A survey of 10 American medical schools found that 74.5% of clinical faculty reported an “unwillingness to record negative evaluations” as a significant problem in accurately capturing student performance [2]. Without focused feedback, underperforming students are less likely to receive the guidance necessary to remediate deficits in knowledge and skills.

Over the past decade, repeated calls for medical education reform have highlighted the need to assist learners in developing specific knowledge and skills [4]. The implementation of competency-based medical education (CBME) has necessitated a reexamination of methods for evaluating and providing feedback to medical trainees. The focus of CBME is to provide more learner-centered education, whereby educators are expected to directly observe trainees and provide context-specific, behaviorally based feedback. Rather than using time-based criteria (e.g., participation in a 2-month clinical rotation) to determine readiness for practice, learners are required to demonstrate observable and measurable competencies specific to the medical discipline.

Medical educators have started to critically examine whether current evaluation methods will meet the stringent assessment requirements inherent in CBME. Current assessment frameworks rely heavily on quantitative evaluation tools such as numerical rating scales and grades. The use of global rating scales that classify learners as being “above” or “below” an expected level fails to provide specific, descriptive feedback about how to improve performance. Quantitative assessment tools possess many methodological strengths; however, they also have significant epistemological limitations that must be considered to best support learners (see Table 1). Medical education researchers have called for more effective ways to capture learners’ progression throughout the medical education continuum [3, 5], leading to a growing appreciation for qualitative evaluation methods. Clinical competence is highly contextual and dependent on many factors within the learning environment. Reducing such contextually driven performance to a numerical score discounts the complexity of the clinical exchange and fails to provide the specificity necessary for change. While many numerically based rating forms include sections for written comments, these are often populated with broad, generic statements such as “good job” or “continue to work on communication.” Such comments do not provide educators or learners with enough information to address specific learning difficulties. The CBME paradigm requires a shift from numerically driven evaluation systems to assessment methods that descriptively capture impressions of learners within specific clinical environments [3, 5].
