Abstract

Background
Choosing or altering the planned statistical analysis approach after examination of trial data (often referred to as ‘p-hacking’) can bias the results of randomised trials. However, the extent of this issue in practice is currently unclear. We conducted a review of published randomised trials to evaluate how often a pre-specified analysis approach is publicly available, and how often the planned analysis is changed.

Methods
A review of randomised trials published between January and April 2018 in six leading general medical journals. For each trial, we established whether a pre-specified analysis approach was publicly available in a protocol or statistical analysis plan and compared this to the trial publication.

Results
Overall, 89 of 101 eligible trials (88%) had a publicly available pre-specified analysis approach. Only 22/89 trials (25%) had no unexplained discrepancies between the pre-specified and conducted analysis. Fifty-four trials (61%) had one or more unexplained discrepancies, and in 13 trials (15%), it was impossible to ascertain whether any unexplained discrepancies occurred due to incomplete reporting of the statistical methods. Unexplained discrepancies were most common for the analysis model (n = 31, 35%) and analysis population (n = 28, 31%), followed by the use of covariates (n = 23, 26%) and the approach for handling missing data (n = 16, 18%). Many protocols or statistical analysis plans were dated after the trial had begun, so earlier discrepancies may have been missed.

Conclusions
Unexplained discrepancies in the statistical methods of randomised trials are common. Increased transparency is required for proper evaluation of results.

Highlights

  • Choosing or altering the planned statistical analysis approach after examination of trial data can bias the results of randomised trials

  • Guidelines such as ICH-E9 [25] (International Conference for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use), SPIRIT [26, 27] (Standard Protocol Items: Recommendations for Interventional Trials), and CONSORT [28] (Consolidated Standards of Reporting Trials) require investigators to pre-specify the principal features of their statistical analysis approach in the trial protocol and report any changes in the trial report

  • In this review, we examined randomised controlled trials published between January and April 2018 in six high-impact general medical journals: Annals of Internal Medicine, The BMJ, Journal of the American Medical Association (JAMA), The Lancet, New England Journal of Medicine (NEJM), and PLOS Medicine



Introduction

Choosing or altering the planned statistical analysis approach after examination of trial data (often referred to as ‘p-hacking’) can bias the results of randomised trials. Guidelines such as ICH-E9 [25] (International Conference for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use), SPIRIT [26, 27] (Standard Protocol Items: Recommendations for Interventional Trials), and CONSORT [28] (Consolidated Standards of Reporting Trials) require investigators to pre-specify the principal features of their statistical analysis approach in the trial protocol and report any changes in the trial report. This strategy reduces the risk of bias from analyses being chosen on the basis of the trial data and allows readers to assess whether inappropriate changes were made. Guidelines for the content of statistical analysis plans have been published [6].
