Randomized controlled trials (RCTs) are considered the most robust source of scientific evidence to inform the medical community about the benefits and risks of therapeutic interventions. In recommendations for practitioners, treatment guidelines recognize the special value of RCTs by designating such studies as the highest level of evidence in assessing the efficacy of various therapeutic strategies. However, despite the acknowledged importance of RCTs, not all randomized trials are equivalent in reliability, credibility, and value. Every trial has limitations that can compromise the study's interpretability and undermine the strength of its conclusions. In extreme cases, a poor-quality RCT can lead to important patient and societal harms.

In this issue of JAMA, the report by Lamas et al of the Trial to Assess Chelation Therapy (TACT) represents a situation in which many important limitations in the design and execution of a clinical trial compromise the reliability of the study and render the results difficult to interpret. Unfortunately, the efforts of these investigators fell short of the minimum level of quality necessary to adequately answer the question they sought to investigate. Nonetheless, all RCTs should be published, because even failed trials provide valuable scientific lessons for the medical community. Accordingly, TACT provides useful insights into the overwhelming challenges faced when trying to determine the effectiveness of an unusual and controversial therapy.

The evolution of clinical trial design over the past 4 decades is based on the principle that a high-quality RCT must effectively minimize bias and variability. Bias is reduced by randomization of patients to alternative treatment strategies, blinding (masking) of all participants (patients and caregivers) to the treatment assignment, and use of an intention-to-treat approach that analyzes patients in their originally assigned treatment group. High levels of patient retention are essential to maintain the integrity of randomization. Validity in clinical trials is enhanced by selecting a sample size large enough to adequately test the hypothesis and by central adjudication of important and objective patient outcomes. Execution of a high-quality RCT requires skilled investigators and study coordinators who understand these critical scientific principles.

For TACT, more than 60% of patients were randomized at enrolling centers described as complementary and alternative medicine sites. Many of these centers have websites describing their services, which include an array of unproven therapies: stem cell therapy to regrow breasts after mastectomy, high-dose intravenous vitamin C to treat cancer, cinnamon to treat diabetes, and antimicrobial essential oils or homeopathic remedies to treat influenza (while warning patients not to undergo immunization). Other sites offer chelation to treat or cure a variety of conditions, including autism in children. A common theme of these centers is evident: they appeal to vulnerable patients with challenging diseases by offering a variety of unscientific and unproven therapies. Whether a high-quality RCT can be performed at such sites is questionable. Not surprisingly, with a high fraction of such study sites, TACT showed some important deviations from adherence to the scientific principles of a well-controlled trial.
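As a concrete illustration of the sample-size principle noted above, the following sketch computes how many patients per arm are needed to detect a modest difference in event rates using the standard normal approximation for two proportions. The event rates, power, and significance level are hypothetical values chosen for illustration, not TACT's actual design parameters.

```python
# Minimal sketch of a two-arm sample-size calculation for comparing event
# proportions. All inputs below are illustrative assumptions, not TACT's
# design parameters.
from math import ceil
from scipy.stats import norm

def n_per_arm(p_control: float, p_treatment: float,
              alpha: float = 0.05, power: float = 0.85) -> int:
    """Normal-approximation sample size per arm for a two-sided test
    of two independent proportions."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    p_bar = (p_control + p_treatment) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p_control * (1 - p_control)
                             + p_treatment * (1 - p_treatment)) ** 0.5) ** 2
    return ceil(numerator / (p_control - p_treatment) ** 2)

# Example: detecting a reduction from a 30% to a 25% composite event rate
# requires on the order of 1,400 patients per arm under these assumptions.
print(n_per_arm(0.30, 0.25))
```

Even under these generous assumptions, detecting a modest absolute difference requires thousands of patients, which is why adequate enrollment and retention are central to trial validity.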
The study randomized 1708 patients, but 311 (18%) were lost to follow-up, nearly all because of withdrawal of consent (289 patients), and importantly, these withdrawals were not equally distributed between the treatment groups. Significantly more patients (n=174) withdrew from the placebo group than from the chelation group (n=115; hazard ratio, 0.66; P=.001). A similar imbalance in discontinuation of randomized treatment was observed: 281 in the placebo group and 233 in the chelation group. In some RCTs, more patients stop study treatment in the active treatment group because of toxicity or adverse drug effects. However, in TACT, why would patients differentially withdraw in such large numbers from the placebo group? A logical explanation is unmasking of treatment assignments. If either the investigators or the patients knew who was receiving chelation, patients assigned to the placebo group would likely be influenced to withdraw or stop study treatment, particularly when some investigators were advocates for chelation therapy.
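The size of this imbalance can be checked with a simple back-of-the-envelope test. The sketch below compares the consent-withdrawal counts reported above with a 2x2 chi-square test; the equal arm sizes (854 per group) are an assumption for illustration, since the exact split is not given here, and the trial's reported analysis used a time-to-event model (the hazard ratio of 0.66), not this cruder comparison.

```python
# Minimal sketch comparing consent-withdrawal rates between arms.
# Arm sizes are assumed equal (1708 / 2 = 854 each) for illustration.
from scipy.stats import chi2_contingency

placebo_n, chelation_n = 854, 854                  # assumed, not exact arm sizes
placebo_withdrew, chelation_withdrew = 174, 115    # withdrawals reported above

table = [
    [placebo_withdrew, placebo_n - placebo_withdrew],
    [chelation_withdrew, chelation_n - chelation_withdrew],
]
chi2, p_value, _, _ = chi2_contingency(table)
print(f"withdrawal rates: {placebo_withdrew/placebo_n:.1%} "
      f"vs {chelation_withdrew/chelation_n:.1%}, p = {p_value:.4f}")
# Under these assumptions the imbalance (about 20% vs 13%) is highly
# significant, in line with the reported P = .001.
```

However the comparison is framed, a differential loss of this magnitude from the placebo arm is difficult to attribute to chance and points toward the unmasking concern raised above.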