Abstract

Over 2 years ago, I wrote an editorial in Medical Decision Making titled ‘‘Time to Retire the 1-in-X Risk Format.’’ In that piece, I reviewed some of the research that had compared risk statistics presented using the 1-in-X format (e.g., a 1 in 12 chance of a birth defect) versus other presentations with numerators greater than 1 (e.g., a 10 in 120 chance). Examining the findings of both a paper by Pighin and others that appeared in that issue as well as others’ work, I concluded that the evidence supported the conclusion that 1-in-X formats reduced patient understanding of risk information and tended to increase risk perceptions. Based on that evidence, I chose to make a strong argument against the use of the 1-in-X format in clinical practice. To cite the previous editorial, ‘‘We need to move the conversation about 1-in-X formats past mere documentation of problems and address the significant need to change clinical practice . . . . any continued use of 1-in-X formats to communicate medical risk is . . . intolerable.’’

In the current issue of this journal, Sirota and others report the results of 5 separate studies that further explore the 1-in-X effect. After first finding both significant and nonsignificant results in individual studies, they pursued a meta-analytical approach that aggregated both their data and data from Pighin and others. Their main findings are that the set of available data provides ‘‘decisive evidence’’ that use of the 1-in-X format does indeed affect risk perceptions but also that the effect is smaller than most others have estimated.

To start, we should applaud the authors for recognizing the value in replication studies. A growing recent body of research in both psychology and medicine has documented evidence of publication bias, and many seemingly well-implemented studies are proving difficult to replicate.
While some journals have historically appeared less interested in publishing replication studies, the editors of Medical Decision Making explicitly seek to recognize the potential scientific value of such work. In this case, the authors’ attention to detail also allowed them to disentangle the 1-in-X effect from several plausible counterhypotheses as well as examine the robustness of the effect in multiple populations. While Sirota and others’ data suggest that 1-in-X effects may be smaller than what earlier studies suggested, what is essential about their finding is not its recalibration of our beliefs about the size of the 1-in-X effect but its methodological rigor in demonstrating that such an effect exists at all.

When determining whether a research finding warrants a change in clinical practice, however, effect size only matters if there are reasons to maintain the status quo beyond simple inertia. Examples of such reasons include increased cost (either long-term or transitional) and new negative consequences such as novel side effects. Yet such is not the case here. There is no extra cost to using percentages, frequencies with larger fixed denominators, or other approaches in clinical risk communications. Clinicians and educators simply have to decide that they will say or write one thing instead of another. As a result, we can make a strong recommendation for change based on the decisive evidence of Sirota and others’ paper even if effect sizes are comparatively small.

Received 10 November 2013 from the Department of Health Behavior and Health Education, University of Michigan, Ann Arbor, MI, USA (BJZ-F); Division of General Medicine, Department of Internal Medicine, University of Michigan, Ann Arbor, MI, USA (BJZ-F); Center for Bioethics and Social Sciences in Medicine, University of Michigan, Ann Arbor, MI, USA (BJZ-F); and Risk Science Center, University of Michigan, Ann Arbor, MI, USA (BJZ-F). Revision accepted for publication 18 November 2013.
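The arithmetic behind the alternative formats discussed above (percentages and frequencies with a larger fixed denominator) is straightforward, and a minimal sketch may make the equivalence concrete. The function name and the choice of a 1,000-person denominator here are illustrative assumptions, not part of any cited study:

```python
from fractions import Fraction

def restate_one_in_x(x, denominator=1000):
    """Restate a '1 in X' risk in two alternative formats.

    Hypothetical helper for illustration only: returns the equivalent
    percentage and the number of affected people per `denominator`
    (a fixed-denominator frequency, e.g. 'N in 1,000').
    """
    risk = Fraction(1, x)                 # exact probability, 1/X
    percentage = float(risk) * 100        # e.g. 1/12 -> ~8.3%
    per_denominator = float(risk * denominator)  # e.g. ~83.3 in 1,000
    return percentage, per_denominator

# The editorial's example, 'a 1 in 12 chance of a birth defect':
pct, n = restate_one_in_x(12)
# pct is about 8.33 (percent); n is about 83.3 (per 1,000)
```

Because all three statements encode the same probability, choosing among them is purely a communication decision, which is the editorial's point about there being no extra cost to change.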
