Abstract

The results of multicenter clinical trials may differ across participating clinical sites. We present a diagnostic approach for evaluating this diversity that emphasizes the relationship between the observed event rates and treatment effects. As an example, we use a trial of sequential strategies of Pneumocystis prophylaxis in human immunodeficiency virus infection with 842 patients randomly allocated to start prophylaxis with trimethoprim/sulfamethoxazole, dapsone, or pentamidine. Prophylaxis failure rates varied significantly across the 30 clinical sites (0–30.3%, p = 0.002 by Fisher's exact test), with prominent variability in the pentamidine arm (0–63.6%). Starting with oral regimens was better than starting with pentamidine in sites with high event rates, whereas the three strategies had more similar efficacy in other sites. Sites enrolling fewer patients had lower event rates and more patients who withdrew prematurely or were lost to follow-up. In a hierarchical regression model adjusting for random measurement error in the observed event rates, starting with trimethoprim/sulfamethoxazole was predicted to be increasingly superior to starting with aerosolized pentamidine as the risk of prophylaxis failure increased (p = 0.01), reducing the risk of failure by 47% when the failure rate with pentamidine was 30%, whereas the two regimens were predicted to be equivalent when the failure rate was 17%. Differences in event rates could reflect a combination of heterogeneity in diagnosis, administration of treatments, and disease risk in patients across sites. A systematic evaluation of differences among clinical sites that focuses on event rates may give further insight into the interpretation of multicenter trial results beyond an average treatment effect. Control Clin Trials 1999;20:253–266
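The abstract does not spell out the hierarchical regression, but a common measurement-error formulation of this kind of risk-versus-effect analysis (the symbols and structure below are chosen here purely for illustration and are not necessarily the authors' exact specification) relates the observed failure counts in each site to an underlying site risk and a site-specific treatment effect:

\[
r_{0j} \sim \mathrm{Binomial}(n_{0j}, \pi_{0j}), \qquad r_{1j} \sim \mathrm{Binomial}(n_{1j}, \pi_{1j})
\]
\[
\mathrm{logit}(\pi_{0j}) = \mu_j, \qquad \mathrm{logit}(\pi_{1j}) = \mu_j + \delta_j
\]
\[
\delta_j = \alpha + \beta\,(\mu_j - \bar{\mu}) + \varepsilon_j, \qquad \varepsilon_j \sim N(0, \tau^2)
\]

Here \(r_{0j}\) and \(r_{1j}\) are the observed failure counts among the \(n_{0j}\) and \(n_{1j}\) patients in site \(j\) starting with aerosolized pentamidine and with trimethoprim/sulfamethoxazole, \(\mu_j\) is the site's underlying log odds of failure on pentamidine, \(\delta_j\) is the site-specific log odds ratio, and \(\beta\) measures how the treatment effect changes with the underlying risk. Because \(\mu_j\) is estimated jointly with \(\delta_j\) rather than plugged in as the observed event rate, the slope \(\beta\) is adjusted for random measurement error in the observed rates, avoiding the regression-to-the-mean bias that arises when effects are regressed directly on observed control-group rates.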
