Abstract
The studies included in a meta‐analysis often do not share a common true effect size, due to differences in, for instance, their design. The resulting variability in true effect sizes, the so‐called between‐study variance, is usually estimated imprecisely. Hence, reporting a confidence interval together with a point estimate of the amount of between‐study variance facilitates interpretation of the meta‐analytic results. Two recommended methods for constructing such a confidence interval are the Q‐profile and the generalized Q‐statistic method, both of which make use of the Q‐statistic. These methods are exact if the assumptions underlying the random‐effects model hold, but because these assumptions are usually violated in practice, the resulting confidence intervals are approximate rather than exact. By means of two Monte Carlo simulation studies with the odds ratio as effect size measure, we illustrate that the coverage probabilities of both methods can be substantially below the nominal coverage rate in situations representative of meta‐analyses in practice. We also show that this undercoverage is caused by violations of the assumptions of the random‐effects model (ie, normal sampling distributions of the effect size measure and known sampling variances) and is especially prevalent if the sample sizes in the primary studies are small.
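To make the mechanics concrete, the following is a minimal sketch (our illustration, not the authors' code; all parameter values are hypothetical) of such a Monte Carlo check in Python. It simulates two‐arm studies with binomial counts, computes log odds ratios with the usual 0.5 continuity correction, constructs the Q‐profile confidence interval for τ² by inverting the generalized Q‐statistic against χ² critical values with k − 1 degrees of freedom, and records how often the interval covers the true τ².

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)

def q_stat(tau2, y, v):
    # Generalized Q-statistic: weighted sum of squares around the
    # weighted mean, with weights 1 / (v_i + tau^2).
    w = 1.0 / (v + tau2)
    mu = np.sum(w * y) / np.sum(w)
    return np.sum(w * (y - mu) ** 2)

def q_profile_ci(y, v, level=0.95):
    # Q-profile CI: the set of tau^2 values for which Q(tau^2) lies
    # between the chi-square critical values with k - 1 df.
    k = len(y)
    lo_crit = stats.chi2.ppf(1 - (1 - level) / 2, k - 1)
    hi_crit = stats.chi2.ppf((1 - level) / 2, k - 1)
    big = 100.0  # search limit for tau^2; ample on the log odds ratio scale
    # Q(tau^2) decreases in tau^2, so each bound is a root of Q - critical value.
    lower = 0.0
    if q_stat(0.0, y, v) > lo_crit:
        lower = optimize.brentq(lambda t: q_stat(t, y, v) - lo_crit, 0.0, big)
    upper = 0.0
    if q_stat(0.0, y, v) > hi_crit:
        upper = optimize.brentq(lambda t: q_stat(t, y, v) - hi_crit, 0.0, big)
    return lower, upper

def simulate_coverage(k=10, tau2=0.1, mu=0.5, n=20, p_c=0.2, reps=2000):
    # One simulation condition: k studies, true between-study variance tau2,
    # average log odds ratio mu, n subjects per arm, control-arm event
    # probability p_c. Returns the estimated coverage probability.
    hits = 0
    logit_c = np.log(p_c / (1 - p_c))
    for _ in range(reps):
        theta = rng.normal(mu, np.sqrt(tau2), k)   # true log odds ratios
        p_e = 1.0 / (1.0 + np.exp(-(logit_c + theta)))
        x_e = rng.binomial(n, p_e)                 # events, experimental arm
        x_c = rng.binomial(n, p_c, size=k)         # events, control arm
        # 0.5 continuity correction keeps estimates finite with zero cells
        a, b = x_e + 0.5, n - x_e + 0.5
        c, d = x_c + 0.5, n - x_c + 0.5
        y = np.log((a * d) / (b * c))              # log odds ratio estimates
        v = 1 / a + 1 / b + 1 / c + 1 / d          # estimated sampling variances
        lo, hi = q_profile_ci(y, v)
        hits += lo <= tau2 <= hi
    return hits / reps

print(simulate_coverage())
```

Because the log odds ratio's sampling distribution is only approximately normal and its sampling variance is estimated rather than known, a run of this kind with small per‐arm sample sizes tends to yield coverage below the nominal 95%, which is the phenomenon the article documents.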
The preparation of this article was supported by grant 406‐13‐050 from the Netherlands Organization for Scientific Research (NWO).
Highlights
Meta‐analysis refers to a set of statistical techniques for combining the estimates of similar studies providing commensurable evidence about some phenomenon of interest.
We created heat maps to gain further insight into whether there is a specific set of conditions for k, τ, π_Ci, n_Ei, and n_Ci for which the coverage probability substantially diverges from the nominal coverage rate.
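A grid of such conditions can be summarized visually along these lines; the sketch below (illustrative values only, reusing simulate_coverage from the sketch above) evaluates coverage over a small k × τ grid and renders it as a heat map.

```python
# Coverage heat map over a small (k, tau) grid; illustrative values only.
# Assumes simulate_coverage() from the previous sketch is in scope.
import numpy as np
import matplotlib.pyplot as plt

ks = [5, 10, 20, 40]           # number of studies
taus = [0.0, 0.2, 0.4, 0.8]    # between-study standard deviation
grid = np.array([[simulate_coverage(k=k, tau2=t ** 2, reps=500) for t in taus]
                 for k in ks])

fig, ax = plt.subplots()
im = ax.imshow(grid, vmin=0.80, vmax=1.00)
ax.set_xticks(range(len(taus)))
ax.set_xticklabels([str(t) for t in taus])
ax.set_yticks(range(len(ks)))
ax.set_yticklabels([str(k) for k in ks])
ax.set_xlabel("tau (between-study SD)")
ax.set_ylabel("k (number of studies)")
fig.colorbar(im, ax=ax, label="coverage of nominal 95% Q-profile CI")
plt.show()
```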
Introduction
Meta‐analysis refers to a set of statistical techniques for combining the estimates of similar studies providing commensurable evidence about some phenomenon of interest (eg, the effectiveness of a treatment, the size of a group difference, or the strength of the association between two variables). If the studies included in a meta‐analysis share the same common true effect size, any differences between the studies' effect size estimates are in theory caused only by sampling variability. If, however, the true effect sizes vary, sampling variability alone cannot explain the differences in effect size estimates, and the effect sizes are said to be heterogeneous. Such between‐study variance may be due to systematic differences between the studies (eg, differences in the sample characteristics or differences in the length or dose of a treatment). If information on how the studies differ is available, it may be possible to account for the between‐study variance by incorporating this information in the model with a meta‐regression analysis.[1]
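For reference, the random‐effects model described here is conventionally written as follows (standard notation; the symbols are ours, not taken verbatim from the article):

```latex
% Random-effects model: observed effect y_i, true effect theta_i,
% between-study variance tau^2, known sampling variance sigma_i^2.
\[
  y_i = \theta_i + \varepsilon_i, \qquad
  \theta_i \sim N(\mu, \tau^2), \qquad
  \varepsilon_i \sim N(0, \sigma_i^2), \qquad i = 1, \dots, k.
\]
% A meta-regression replaces the common mean mu with a linear
% predictor of study-level covariates x_i:
\[
  \theta_i \sim N(\mathbf{x}_i^\top \boldsymbol{\beta}, \tau^2).
\]
```

The assumptions the article identifies as problematic in practice are visible here: the normal distribution of the εᵢ and the treatment of the σᵢ² as known constants.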