Abstract

We thank Donato and Paladugu for their letter. They suggest that our assessment rating scales may have led to poor learner satisfaction and erroneous faculty judgment, and they ask us to abandon scales in favor of purely narrative data (e.g., "truly embrace the subjective"). Although learner satisfaction and faculty judgments in our system have improved over time, we believe any issues have less to do with rating type than with how we use the data. We agree that assessment and feedback are complex, and that interventions meant to help can have the opposite effect.1,2 However, we do not accept that values created by rating scales must be seen as summative and objective, or that narrative data must be seen as formative and subjective. Narrative assessments do not inherently remove risk, as they may contain sensitive or detailed feedback and can be used in a summative manner, just like numbers. Numerical ratings are not inherently more objective than narrative comments, particularly in workplace-based assessments, as they represent a "code" based on a variety of inputs. The reality is that learners may perceive any type of assessment as subjective and high-risk when it is used improperly.

Numerical and narrative assessments represent a polarity, and rather than abandoning one for the other, we suggest maximizing the value of both.3 Training programs should develop support systems, such as longitudinal coaching, to help learners interpret and integrate all types of data. Coaches should personalize assessment data with a goal-directed approach, using feedback as the scaffold.1 In turn, coaches should be removed from making summative judgments, and this should be made explicit to learners.4 Data used for formative purposes should truly be low stakes, with no single data point representing a threat to the learner. High-stakes decisions should be based on all available data (numerical and narrative) and should not come as a surprise to learners or programs.4 Learners should coproduce these programs of assessment with faculty members.

Finally, all forms of assessment should be supported by validity evidence. Do the data help learners become better over time? Why do we need both words and numbers? Although it may be possible to judge improvement over time using narrative alone, it is difficult to do. Numbers tell the story quickly but imperfectly; narratives do so more slowly, but also imperfectly. Together, they tell a better story than either can alone. How we listen to and use both assessment methods matters most.

Eric J. Warm, MD
Professor of medicine and program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio; @CincyIM; [email protected]; ORCID: https://orcid.org/0000-0002-6088-2434.

Benjamin Kinnear, MD, MEd
Assistant professor of medicine and pediatrics and associate program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio.

Matthew Kelleher, MD, MEd
Assistant professor of medicine and pediatrics and associate program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio.

Dana Sall, MD, MEd
Assistant professor of medicine and associate program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio.

Eric Holmboe, MD
Senior vice president, Milestones Development and Evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois; adjunct professor of medicine, Yale University, New Haven, Connecticut; and adjunct professor, Feinberg School of Medicine at Northwestern University, Chicago, Illinois.
