Abstract

Background: Clinical researchers have often preferred to use a fixed effects model for the primary interpretation of a meta-analysis. Heterogeneity is usually assessed via the well-known Q and I2 statistics, along with the random effects estimate they imply. In recent years, alternative methods for quantifying heterogeneity have been proposed that are based on a 'generalised' Q statistic.

Methods: We review 18 IPD meta-analyses of RCTs into treatments for cancer, in order to quantify the amount of heterogeneity present and also to discuss practical methods for explaining heterogeneity.

Results: Differing results were obtained when the standard Q and I2 statistics were used to test for the presence of heterogeneity. The two meta-analyses with the largest amount of heterogeneity were investigated further, and on inspection the straightforward application of a random effects model was not deemed appropriate. Compared to the standard Q statistic, the generalised Q statistic provided a more accurate platform for estimating the amount of heterogeneity in the 18 meta-analyses.

Conclusions: Explaining heterogeneity via the pre-specification of trial subgroups, graphical diagnostic tools and sensitivity analyses produced a more desirable outcome than an automatic application of the random effects model. Generalised Q statistic methods for quantifying and adjusting for heterogeneity should be incorporated as standard into statistical software. Software is provided to help achieve this aim.
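
As a rough illustration of the 'generalised' Q statistic approach mentioned above, the sketch below estimates the between-trial variance tau^2 by solving Q_gen(tau^2) = k - 1, where Q_gen uses weights 1/(se_i^2 + tau^2); this is the Paule-Mandel style of estimator commonly associated with generalised Q methods. It is a minimal Python illustration written for this summary, not the software provided with the paper, and the function names and data are invented.

import numpy as np

def generalised_q(tau2, y, se):
    """Generalised Q: weighted sum of squared deviations from the weighted
    mean, with weights 1 / (se_i^2 + tau2)."""
    w = 1.0 / (se**2 + tau2)
    theta = np.sum(w * y) / np.sum(w)
    return np.sum(w * (y - theta)**2)

def tau2_from_generalised_q(y, se, tol=1e-8):
    """Estimate tau^2 by solving Q_gen(tau^2) = k - 1 by bisection.
    Returns 0 when tau^2 = 0 already leaves no excess heterogeneity."""
    k = len(y)
    if generalised_q(0.0, y, se) <= k - 1:
        return 0.0
    lo, hi = 0.0, max(np.var(y), 1e-8)
    while generalised_q(hi, y, se) > k - 1:    # expand until the root is bracketed
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if generalised_q(mid, y, se) > k - 1:  # Q_gen decreases as tau^2 grows
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Invented example: trial effect estimates (log hazard ratios) and standard errors
y = np.array([-0.60, -0.05, 0.25, -0.80, -0.15])
se = np.array([0.15, 0.20, 0.25, 0.30, 0.18])
print(tau2_from_generalised_q(y, se))

Solving for tau^2 this way ties the heterogeneity estimate directly to the generalised Q statistic, which is the sense in which such methods can both quantify and adjust for heterogeneity.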

Highlights

  • Clinical researchers have often preferred to use a fixed effects model for the primary interpretation of a meta-analysis

  • In order to complement the I2 (inconsistency) reference intervals, the 18 meta-analyses are coloured according to their Q statistic status: green for no heterogeneity (Q test p-value ≥ 0.1) and red for substantial heterogeneity (p-value < 0.1)

  • Henmi and Copas [38] have recently advocated an interesting compromise: to use the fixed effects point estimate θFE, which is robust to small study effects, but to surround it with a confidence interval derived under the random effects model (a simplified sketch of this idea follows below)
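
The sketch below illustrates the basic idea of this compromise under the usual inverse variance set-up: the point estimate uses fixed effects weights 1/se_i^2, but its variance is evaluated allowing the true trial effects to vary with between-trial variance tau^2. This is a simplified illustration only, not Henmi and Copas's exact construction (which uses a more refined distributional approximation); the function name and the example data are hypothetical.

import numpy as np

def fe_point_re_interval(y, se, tau2):
    """Fixed effects point estimate with a 95% confidence interval whose
    width reflects a random effects model with between-trial variance tau2."""
    w = 1.0 / se**2                              # fixed effects (inverse variance) weights
    theta_fe = np.sum(w * y) / np.sum(w)         # fixed effects point estimate
    # Variance of theta_fe when each trial's true effect varies with variance tau2
    var_re = np.sum(w**2 * (se**2 + tau2)) / np.sum(w)**2
    half_width = 1.96 * np.sqrt(var_re)          # normal quantile for a 95% interval
    return theta_fe, (theta_fe - half_width, theta_fe + half_width)

# Invented example: log hazard ratios, standard errors and a tau2 value
y = np.array([-0.60, -0.05, 0.25, -0.80, -0.15])
se = np.array([0.15, 0.20, 0.25, 0.30, 0.18])
print(fe_point_re_interval(y, se, tau2=0.05))

The point estimate is unchanged from the fixed effects analysis; only the interval is widened to acknowledge between-trial variation.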


Introduction

Clinical researchers have often preferred to use a fixed effects model for the primary interpretation of a meta-analysis. As shown in a 2005 review of the clinical research literature [1], it is still most common to meta-analyse results across clinical studies using the inverse variance approach, to yield a ‘fixed’ or ‘common’ effect estimate. Regardless of whether the meta-analysis is based on IPD or aggregate data, substantial statistical heterogeneity between studies may still remain. Such heterogeneity is usually assessed via Cochran’s Q statistic and the I2 statistic. Unlike Q, I2 is designed to be independent of the number of trials constituting the meta-analysis and independent of the outcome’s scale, so it can be compared across meta-analyses. It is reported as standard, with or without Cochran’s Q.
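
To make these quantities concrete, the sketch below computes the inverse variance fixed effects estimate, Cochran's Q and I2, assuming each trial contributes an effect estimate y_i (for example a log hazard ratio) with standard error se_i. It is a minimal Python illustration written for this summary, not the software provided with the paper, and the example data are invented.

import numpy as np

def fixed_effect_summary(y, se):
    """Inverse variance fixed effects estimate, Cochran's Q and I2.
    y: trial effect estimates (e.g. log hazard ratios); se: their standard errors."""
    w = 1.0 / se**2                               # inverse variance weights
    theta_fe = np.sum(w * y) / np.sum(w)          # 'fixed' or 'common' effect estimate
    se_fe = np.sqrt(1.0 / np.sum(w))              # its standard error
    q = np.sum(w * (y - theta_fe)**2)             # Cochran's Q statistic
    k = len(y)
    # I2: percentage of total variation beyond what chance alone would produce
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    return theta_fe, se_fe, q, i2

# Invented example data for five hypothetical trials
y = np.array([-0.60, -0.05, 0.25, -0.80, -0.15])
se = np.array([0.15, 0.20, 0.25, 0.30, 0.18])
print(fixed_effect_summary(y, se))

With the invented numbers above, Q comfortably exceeds its degrees of freedom (k - 1 = 4) and I2 comes out at around 70%, a level that would usually be read as substantial heterogeneity.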

