The term "evidence-based medicine" (EBM) was introduced in 1992 in a seminal paper by Gordon Guyatt as a solution to "an exploding volume of literature . . . deepening concern about burgeoning medical costs, and increasing attention to quality and outcomes." Over the ensuing decades, EBM has been integrated into the medical culture and incorporated almost universally into medical school and residency curricula. In addition, Guyatt's recognition of the need to reduce health care costs and improve quality has entered the mainstream consciousness, framed increasingly around the notion of "value." Value can be conceptualized as the ratio of health outcomes to costs. Skills in EBM are critical to optimizing value, since a deep understanding of evidence is required for predicting health outcomes in individual patients. In particular, clinicians must recognize the clinical impact of interventions, grapple with uncertainty in the evidence, and uncover bias in published studies in order to fully balance the benefits and harms of potential approaches. More than 20 years of EBM immersion should have thoroughly prepared us for these tasks—but has it?

The study by Caverly et al in this issue of the Journal of Graduate Medical Education suggests that EBM education has failed to prepare physicians for high-value practice. The authors presented medical residents and attending internal medicine physicians with 4 vignettes that described drug studies with different types of endpoints: total mortality, disease-specific mortality, a surrogate outcome (simply called a "risk factor" in the vignette), and a composite outcome with a surrogate component. Participants were asked to rate the extent to which each study proved that the new drug "might help people." Improvement in the composite outcome, as proof of drug benefit, was rated most highly by both residents and attending physicians. While participants were not asked to directly compare endpoints, fewer than half rated all-cause mortality as better proof of benefit than improvement in a surrogate endpoint, and fewer than a quarter rated all-cause mortality as better proof than a composite endpoint. Despite limitations in this study approach, the findings suggest that physicians lack the skill to accurately weigh the relative importance of different types of endpoints in clinical trials, and that they tend to overvalue surrogate and composite endpoints.

The overvaluing of surrogate and composite endpoints threatens health care value, because improvements in surrogate endpoints may occur without improvement (or with worsening) of clinical outcomes. For example, class 1C antiarrhythmic agents were routinely prescribed for arrhythmia suppression to patients with asymptomatic ventricular arrhythmias after myocardial infarction, until the Cardiac Arrhythmia Suppression Trial found that these drugs actually increased mortality compared with placebo. Use of dual angiotensin-converting enzyme inhibitor and angiotensin receptor blocker therapy for a variety of indications grew rapidly based on possible benefit in surrogate outcomes (eg, proteinuria in nephropathy) until complications such as hypotension and hyperkalemia were clarified. In both of these cases, prescribing based on surrogate outcomes likely harmed large numbers of patients.
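The divergence can be made concrete with a schematic drug-versus-placebo comparison; the numbers below are hypothetical, chosen only to show how a surrogate and a clinical outcome can move in opposite directions, and are not data from the trials cited above.

% Hypothetical two-arm results (illustration only, not trial data):
\[
\underbrace{90\% \text{ vs } 40\%}_{\text{arrhythmia suppression (surrogate): drug appears better}}
\qquad
\underbrace{8\% \text{ vs } 4\%}_{\text{total mortality (clinical outcome): drug is worse}}
\]

A trial reporting only the first line would look like a success; only the second line answers whether patients are helped.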
Further, since pharmaceutical industry marketing is often based on surrogate outcomes, physicians' failure to recognize the limitations of surrogate outcomes may facilitate successful industry marketing of expensive new (and possibly minimally effective) drugs, resulting in reduced value for patients.

Why, despite EBM education, are physicians unable to appreciate the greater value of a reduction in mortality compared with an improvement in a surrogate outcome? First, evaluating the appropriateness of endpoints is not adequately emphasized in EBM education. Despite the ubiquitous "PICO" structure for clinical questions, with "O" representing the outcome of interest, there is little instruction in the relative weight of different outcomes, and the complexity of composite outcomes defies simple explanation (a schematic example follows below). Instruction in the applicability of evidence to patient care includes consideration of whether all clinically relevant outcomes were reported.
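One reason composite outcomes resist simple explanation is that a single soft component can drive the result. The arithmetic below is a hypothetical illustration (invented numbers, not from any trial), assuming a composite of death or hospitalization with no overlap between components.

% Hypothetical two-arm trial; composite = death or hospitalization:
\[
\text{control: } \underbrace{8\%}_{\text{death}} + \underbrace{12\%}_{\text{hospitalization}} = 20\%,
\qquad
\text{drug: } \underbrace{8\%}_{\text{death}} + \underbrace{7\%}_{\text{hospitalization}} = 15\%
\]
\[
\text{relative risk reduction in the composite} \;=\; \frac{20\% - 15\%}{20\%} \;=\; 25\%,
\quad \text{while mortality is unchanged.}
\]

A headline "25% relative risk reduction" is entirely hospitalization-driven, which is exactly the distinction the surveyed physicians failed to draw.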