Abstract

Twenty years ago, in Effectiveness and Efficiency, Archie Cochrane emphasized the importance of randomized controlled trials (RCTs) in guiding decisions about health care (Cochrane, 1972). Randomized trials are not always required to assess the effects of health care (the good and bad effects of some forms of care are obvious), and sometimes trials are not feasible. But for many forms of care, trials involving sufficient numbers of participants are essential to distinguish reliably between the effects of care and the effects of biases or chance.

Just as important as conducting the trials, though, is disseminating the results through systematic reviews of the findings. Such reviews depend on the difficult task of identifying all relevant trials, and several efforts are under way internationally to coordinate this work. If people are to benefit from the results of trials, all the steps between research and practice must be accomplished effectively. Trials must be properly designed, conducted, analyzed, and reported. Their results must be assembled in systematic, up-to-date, and accessible reviews. The results of these reviews must be taken into account by decision-makers, and, finally, based on these decisions, there must be effective systems to audit how well local or national guidelines for health care are followed. Currently, weaknesses exist at all these steps.

Cochrane drew attention to a particular weakness, however, when he criticized the medical profession for not having organized a system for producing up-to-date reviews of the results of RCTs. Experience gained over the past decade provides a useful basis for developing such a system (Chalmers, 1991). In particular, it has become clear that the same scientific principles that are applied to the design and conduct of primary research must also be applied to the process of reviewing that research (Mulrow, 1987; Haynes, 1991).

Impressive examples now exist of the power of systematic reviews to provide reliable answers to important questions: for example, the effects of treatment on early breast cancer (Early Breast Cancer Trialists' Collaborative Group, 1992). Recent studies have shown that if systematic reviews, updated periodically, had been started at the beginning of a series of related trials, reliable recommendations for treatment would have been made earlier (Lau et al., 1992). Unsystematically conducted reviews in journals and textbooks have sometimes taken more than a decade to recommend treatments that a systematic review of trials would have shown to prevent premature death; in addition, other treatments have been endorsed long after evidence from trials had suggested that they were useless or actually harmful (Antman et al., 1992).

The usual, unsystematic approach to reviewing the effects of care also increases the probability that resources will be wasted. For example, a systematic review of RCTs a decade ago would have shown that a short course of corticosteroids given to mothers expected to give birth prematurely substantially reduces the risk of neonatal morbidity and death (Crowley et al., 1990). Repeated failure to conduct, and to apply the results of, systematic reviews of these trials has not only resulted in the unnecessary suffering of tens of thousands of babies but has also meant that neonatal care has been more expensive than it need have been (Mugford et al., 1991).
Similarly, research funding bodies and ethics committees should be concerned about the extent to which resources are wasted on unnecessary research, for example in repeated demonstrations of the protective effects of prophylactic antibiotics for some forms of surgery (Baum et al., 1981).
