Abstract

Background: Randomized controlled trials in orthopaedics are mainly powered to detect large effect sizes. A discrepancy between the estimated and the true mean difference is a challenge for statistical inference based on p-values. We explored the justifications given for the mean difference estimates used in power calculations. We also assessed the distribution of observations in the primary outcome and the possibility of ceiling effects.

Methods: Systematic review of randomized controlled trials with power calculations published in eight clinical orthopaedic journals between 2016 and 2019. Trials with one continuous primary outcome and 1:1 allocation were eligible. Rationales and references for the mean difference estimate were recorded from the Methods sections. The possibility of a ceiling effect was addressed by assessing the weighted mean and standard deviation of the primary outcome and, where available, its elaboration in the Discussion section of each RCT.

Results: 264 trials were included in this study. Of these, 108 (41 %) provided some rationale or reference for the mean difference estimate. The most common rationales or references were the minimal clinically important difference (16 %), observational studies on the same subject (8 %) and the authors' own judgment of clinical relevance (6 %). In a third of the trials, the weighted mean plus 1 standard deviation of the primary outcome exceeded the best value on the patient-reported outcome measure scale, indicating a possible ceiling effect in the outcome.

Conclusions: The mean difference estimates chosen for power calculations are rarely properly justified in orthopaedic trials. In general, trials with a patient-reported outcome measure as the primary outcome do not assess or report the possibility of a ceiling effect in the primary outcome or elaborate on it in the Discussion section.
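As a minimal sketch of the ceiling-effect screen described above, the check amounts to asking whether the weighted mean plus one standard deviation of the follow-up score exceeds the best attainable value on the PROM scale. The values and variable names below are illustrative placeholders, not data from any reviewed trial:

```python
# Illustrative ceiling-effect screen for a PROM scored 0-100 (assumed scale).
scale_best = 100          # best possible score on the PROM scale (assumption)
mean_score = 88.4         # hypothetical weighted mean of the primary outcome at follow-up
sd_score = 14.2           # hypothetical weighted standard deviation

# Criterion used in the review: mean + 1 SD exceeding the best scale value
possible_ceiling = (mean_score + sd_score) > scale_best
print(f"mean + 1 SD = {mean_score + sd_score:.1f} -> possible ceiling effect: {possible_ceiling}")
```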

Highlights

  • Randomized controlled trials in orthopaedics are powered to mainly find large effect sizes

  • We investigate the rationale behind the choice of the mean difference (MD) estimates (MDest) in randomized controlled trials (RCTs) published in eight orthopaedic journals between 2016 and 2019

  • Of the 505 RCTs identified, 264 RCTs reported the power calculation for one continuous primary outcome, and these were included in the analysis (Fig. 1)



Introduction

Randomized controlled trials in orthopaedics are mainly powered to detect large effect sizes. Minimal clinically important differences (MCIDs) have been established for common patient-reported outcome measures (PROMs) in the hope of aligning outcome measures with patient values by expressing patient-level change relative to health status. These estimates are mean estimates of patient-level change in PROM scores, anchored to an external question about change in health status, to the distribution of these change scores, or to both [2,3,4]; expert panels can also formulate them, and they can serve as a foundation for a realistic MD estimate [5]. Bias between the estimated and the observed mean difference may occur if there is a mismatch and the MCID estimate cannot be generalized to all follow-up time-points.
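For context, a minimal sketch of the sample-size arithmetic that such an MD estimate feeds into, assuming a standard two-sample comparison of means with equal allocation and a normal approximation; the MD (delta), standard deviation, and error rates below are illustrative placeholders, not figures from any reviewed trial:

```python
import math
from statistics import NormalDist

def n_per_group(delta: float, sd: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size per arm for a two-sample comparison of means."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided type I error
    z_beta = z.inv_cdf(power)            # statistical power
    n = 2 * ((z_alpha + z_beta) ** 2) * (sd / delta) ** 2
    return math.ceil(n)

# Example: an MCID of 10 points used as the MD estimate, with SD = 20 points.
print(n_per_group(delta=10, sd=20))  # about 63 participants per group
```

If the chosen MD estimate overstates the clinically plausible difference, the computed sample size per group shrinks and the trial ends up powered only for an implausibly large effect, which is the discrepancy the review examines.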

