Abstract

Clinical trials that stop early for benefit yield a treatment difference estimate that overestimates the true effect. The consequences of this fact have been extensively debated in the literature. Some researchers argue that early stopping, or truncation, is an important source of bias in treatment effect estimates, particularly when truncated studies are incorporated into meta-analyses. Such claims are bound to lead some systematic reviewers to consider excluding truncated studies from evidence synthesis. We therefore investigated the implications of this strategy by examining the properties of sequentially monitored studies conditional on reaching the final analysis. As well as estimation bias, we studied information bias, measured by the difference between standard measures of statistical information, such as sample size, and the actual information based on the conditional sampling distribution. We found that excluding truncated studies leads to underestimation of treatment effects and overestimation of information. Importantly, the information bias increases with the estimation bias, meaning that greater estimation bias is accompanied by greater overweighting in a meta-analysis. Simulations of meta-analyses confirmed that the bias from excluding truncated studies can be substantial. In contrast, when meta-analyses included truncated studies, treatment effect estimates were essentially unbiased. Previous analyses comparing treatment effects in truncated and non-truncated studies are shown not to be indicative of bias in truncated studies. We conclude that early stopping of clinical trials is not a substantive source of bias in meta-analyses and recommend that all studies, both truncated and non-truncated, be included in evidence synthesis.
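The phenomenon described in the abstract can be illustrated with a small simulation. The sketch below (not the authors' code; all parameters, including the true effect, stage sizes, and the interim stopping boundary, are illustrative assumptions) generates two-stage group-sequential trials that may stop early for benefit, then compares a fixed-effect meta-analytic estimate that includes all trials with one that excludes the truncated trials.

```python
# Minimal sketch, assuming a two-arm trial with one interim look.
# Not the paper's method: parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
true_delta = 0.3        # assumed true standardized treatment effect
n_per_stage = 50        # assumed patients per arm per stage
stop_boundary = 2.797   # assumed interim z boundary (O'Brien-Fleming-like, 2 looks)
n_trials = 20000

est_all, w_all = [], []          # every trial, at its stopping point
est_final, w_final = [], []      # only trials reaching the final analysis

for _ in range(n_trials):
    # Stage 1: interim analysis on the first n_per_stage patients per arm
    x1 = rng.normal(true_delta, 1.0, n_per_stage)
    y1 = rng.normal(0.0, 1.0, n_per_stage)
    se1 = np.sqrt(2 / n_per_stage)
    d1 = x1.mean() - y1.mean()
    if d1 / se1 > stop_boundary:
        # Truncated for benefit: report the interim estimate
        est, se = d1, se1
    else:
        # Continue to the final analysis with a second stage
        x = np.concatenate([x1, rng.normal(true_delta, 1.0, n_per_stage)])
        y = np.concatenate([y1, rng.normal(0.0, 1.0, n_per_stage)])
        se = np.sqrt(2 / (2 * n_per_stage))
        est = x.mean() - y.mean()
        est_final.append(est)
        w_final.append(1 / se**2)
    est_all.append(est)
    w_all.append(1 / se**2)

# Fixed-effect (inverse-variance weighted) pooled estimates
pooled_all = np.average(est_all, weights=w_all)
pooled_excluding = np.average(est_final, weights=w_final)
print(f"true effect                 : {true_delta:.3f}")
print(f"pooled, all trials          : {pooled_all:.3f}")
print(f"pooled, truncated excluded  : {pooled_excluding:.3f}")
```

Under these assumed settings the pooled estimate excluding truncated trials falls below the true effect, while the estimate including all trials sits close to it, consistent with the abstract's conclusion that truncated studies should be retained in evidence synthesis.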
