Abstract

In regional anesthesia, the efficacy of novel blocks is typically evaluated using randomized controlled trials (RCTs), the findings of which are aggregated in systematic reviews and meta-analyses. Systematic review authors frequently cite the small sample size of RCTs as a limitation of this literature. We sought to determine via statistical simulation whether small sample size could be an expected property of RCTs focusing on novel blocks with typical effect sizes. We simulated the conduct of a series of RCTs comparing a novel block versus placebo on a single continuous outcome measure. Simulation analysis inputs were obtained from a systematic bibliographic search of meta-analyses. Primary outcomes were the predicted number of large trials (empirically defined as N ≥ 256) and total patient enrollment. Simulation analysis predicted that a novel block would be tested in 16 RCTs enrolling a median of 970 patients (interquartile range (IQR) across 1000 simulations: 806, 1269), with no large trials. Among possible modifications to trial design, decreasing the statistical significance threshold from p < 0.05 to p < 0.005 was most effective at increasing the total number of patients represented in the final meta-analysis, but was associated with early termination of the trial sequence due to futility in block vs. block comparisons. Small sample size of regional anesthesia RCTs comparing a novel block to placebo is a rational outcome of trial design. Feasibly large trials are unlikely to change conclusions regarding block vs. placebo comparisons.
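The sketch below illustrates the general shape of such a simulation: a sequence of two-arm RCTs on a standardized continuous outcome, each sized by a conventional power calculation and analyzed at a chosen significance threshold, with total enrollment and the number of "large" trials (N ≥ 256) tallied across the sequence. The effect size of 0.8, the 80% power target, the fixed number of 16 trials, and the normal-approximation sample-size formula are illustrative assumptions, not the inputs the authors derived from their bibliographic search.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def sample_size_per_arm(effect_size, alpha=0.05, power=0.8):
    """Per-arm sample size for a two-sample comparison (normal approximation)."""
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    return int(np.ceil(2 * ((z_a + z_b) / effect_size) ** 2))

def simulate_trial(n_per_arm, true_effect, alpha=0.05):
    """Simulate one block-vs-placebo RCT on a standardized continuous outcome."""
    placebo = rng.normal(0.0, 1.0, n_per_arm)
    block = rng.normal(true_effect, 1.0, n_per_arm)
    _, p = stats.ttest_ind(block, placebo)
    return p < alpha, 2 * n_per_arm

def simulate_sequence(true_effect=0.8, n_trials=16, alpha=0.05):
    """Run a sequence of trials; tally total enrollment and 'large' trials (N >= 256)."""
    total_enrolled, large_trials = 0, 0
    for _ in range(n_trials):
        # Assumes each trial is powered for the true effect (illustrative simplification).
        n_per_arm = sample_size_per_arm(true_effect, alpha=alpha)
        _, enrolled = simulate_trial(n_per_arm, true_effect, alpha=alpha)
        total_enrolled += enrolled
        if enrolled >= 256:
            large_trials += 1
    return total_enrolled, large_trials

if __name__ == "__main__":
    totals = [simulate_sequence()[0] for _ in range(1000)]
    print("median total enrollment:", int(np.median(totals)))
```

Lowering alpha to 0.005 in this sketch inflates each trial's required sample size, which is the mechanism by which a stricter significance threshold increases total enrollment in the abstract's analysis.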
