Background
Multicentre RCTs are widely used by critical care researchers to answer important clinical questions. However, few trials evaluating mortality outcomes report statistically significant results. We hypothesised that the low proportion of trials reporting statistically significant differences for mortality outcomes is plausibly explained by lower-than-expected effect sizes combined with a low proportion of participants who could realistically benefit from the studied interventions.

Methods
We reviewed multicentre trials in critical care published over a 10-yr period in the New England Journal of Medicine, the Journal of the American Medical Association, and the Lancet. To test our hypothesis, we analysed the results using a Bayesian model to investigate the relationship between the proportion of effective interventions and the proportion of statistically significant results under prior distributions of effect size and trial participant susceptibility.

Results
Five of 54 trials (9.3%) reported a significant difference in mortality between the control and intervention groups. The median expected and observed differences in absolute mortality were 8.0% and 2.0%, respectively. Our modelling shows that, across trials, a lower-than-expected effect size combined with a low proportion of potentially susceptible participants is consistent with the observed proportion of trials reporting significant differences, even when most interventions are effective.

Conclusions
When designing clinical trials, researchers most likely overestimate true population effect sizes for critical care interventions. Bayesian modelling demonstrates that it is not necessarily the case that most studied interventions lack efficacy. In fact, it is plausible that many studied interventions have clinically important effects that are missed.