Abstract

Meta‐analyses conventionally weight study estimates on the inverse of their error variance, in order to maximize precision. Unbiased variability in the estimates of these study‐level error variances increases with the inverse of study‐level replication. Here, we demonstrate how this variability accumulates asymmetrically across studies in precision‐weighted meta‐analysis, to cause undervaluation of the meta‐level effect size or its error variance (the meta‐effect and meta‐variance).

Small samples, typical of the ecological literature, induce big sampling errors in variance estimation, which substantially bias precision‐weighted meta‐analysis. Simulations revealed that biases differed little between random‐ and fixed‐effects tests. Meta‐estimation of a one‐sample mean from 20 studies, with sample sizes of 3–20 observations, undervalued the meta‐variance by c. 20%. Meta‐analysis of two‐sample designs from 20 studies, with sample sizes of 3–10 observations, undervalued the meta‐variance by 15%–20% for the log response ratio (lnR); it undervalued the meta‐effect by c. 10% for the standardized mean difference (SMD).

For all estimators, biases were eliminated or reduced by a simple adjustment to the weighting on study precision. The study‐specific component of error variance prone to sampling error and not parametrically attributable to study‐specific replication was replaced by its cross‐study mean, on the assumptions of random sampling from the same population variance for all studies, and sufficient studies for averaging. Weighting each study by the inverse of this mean‐adjusted error variance universally improved accuracy in estimation of both the meta‐effect and its significance, regardless of number of studies. For comparison, weighting only on sample size gave the same improvement in accuracy, but could not sensibly estimate significance.

For the one‐sample mean and two‐sample lnR, adjusted weighting also improved estimation of between‐study variance by DerSimonian‐Laird and REML methods. For random‐effects meta‐analysis of SMD from little‐replicated studies, the most accurate meta‐estimates obtained from adjusted weights following conventionally weighted estimation of between‐study variance.

We recommend adoption of weighting by inverse adjusted‐variance for meta‐analyses of well‐ and little‐replicated studies, because it improves accuracy and significance of meta‐estimates, and it can extend the scope of the meta‐analysis to include some studies without variance estimates.
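To make the adjustment concrete, the sketch below illustrates it for the simplest case of a one‐sample mean, where each study's error variance is v_i = s_i²/n_i and the study‐specific component prone to sampling error, s_i², is replaced by its cross‐study mean before inverse‐variance weighting. This is a minimal illustration under those assumptions, not the authors' code; the function name meta_mean and the example data are hypothetical.

```python
# Minimal sketch (not the authors' code) of mean-adjusted inverse-variance
# weighting for a one-sample mean, where v_i = s_i^2 / n_i.
import numpy as np

def meta_mean(effects, s2, n, adjusted=True):
    """Precision-weighted meta-estimate of a one-sample mean.

    effects : per-study effect estimates (study means)
    s2      : per-study sample variances (each estimated from n_i observations)
    n       : per-study sample sizes
    adjusted: if True, replace each s_i^2 by the cross-study mean of s^2
              (assumes all studies sample the same population variance).
    """
    effects, s2, n = map(np.asarray, (effects, s2, n))
    if adjusted:
        v = s2.mean() / n          # adjusted error variance: mean(s^2) / n_i
    else:
        v = s2 / n                 # conventional error variance: s_i^2 / n_i
    w = 1.0 / v                    # inverse-variance weights
    est = np.sum(w * effects) / np.sum(w)
    var = 1.0 / np.sum(w)          # error variance of the meta-estimate
    return est, var

# Example with small, unequal sample sizes typical of ecological studies
rng = np.random.default_rng(1)
n = np.array([3, 5, 8, 12, 20])
studies = [rng.normal(0.5, 1.0, size=k) for k in n]
effects = [s.mean() for s in studies]
s2 = [s.var(ddof=1) for s in studies]
print(meta_mean(effects, s2, n, adjusted=False))  # conventional weighting
print(meta_mean(effects, s2, n, adjusted=True))   # mean-adjusted weighting
```

Note that with a common mean(s²) the adjusted weights become proportional to n_i, but unlike plain n-weighting they retain a variance scale, so an error variance for the meta-estimate can still be computed.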

Highlights

  • A meta-analysis of an effect of interest serves to combine estimates of effect size from across studies, often for the purpose of achieving an overall estimate with more precision than can be obtained from any one study and more power for significance tests (Hedges & Pigott, 2001).

  • Meta-analysis of two-sample designs from 20 studies, with sample sizes of 3–10 observations, undervalued the meta-variance by 15%–20% for the log response ratio; it undervalued the meta-effect by c. 10% for the standardized mean difference (SMD).

  • For random-effects meta-analysis of SMD from little-replicated studies, the most accurate meta-estimates obtained from adjusted weights following conventionally weighted estimation of between-study variance


INTRODUCTION

A meta-analysis of an effect of interest serves to combine estimates of effect size from across studies, often for the purpose of achieving an overall estimate with more precision than can be obtained from any one study and more power for significance tests (Hedges & Pigott, 2001). Meta-analyses conventionally weight each study i by the inverse of its observed error variance: 1/v_i for fixed-effects tests, or 1/(v_i + T²) for random-effects tests, where T² estimates the between-study variance. This weighting aims to minimize the variance in the meta-estimate of effect size, thereby maximizing its precision (Hedges, 1981). The relative precision of a study-B estimate compared to a study-A estimate, set by v_B/v_A, determines its probability of losing accuracy. This 1:1 correspondence of precision with accuracy at the study level would apply to a meta-estimate based on inverse-variance weighting only if the population variance σ² were estimated precisely by the sample variance s² among observations. Although n-weighting can provide an unbiased estimator of effect size, it has the considerable disadvantage of enforcing the same value of unity for all studies on the sample variance and any other components of error variance not attributable to study-specific replication, which generally rules out sensible estimation of a meta-variance (Hedges, 1983). We use simulations to evaluate the mean-adjusted weighting described in the Abstract against conventional inverse-variance weighting and n-weighting.
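The two conventional weighting schemes named above can be written out directly. The following is a hedged sketch rather than code from the paper: fixed-effects weights 1/v_i and random-effects weights 1/(v_i + T²), with T² estimated by the DerSimonian-Laird method mentioned in the Abstract. Function names and the example values are illustrative only.

```python
# Sketch of conventional inverse-variance weighting: fixed-effects weights
# 1/v_i and random-effects weights 1/(v_i + T^2), with T^2 estimated by the
# DerSimonian-Laird method-of-moments estimator.
import numpy as np

def dersimonian_laird_tau2(y, v):
    """DerSimonian-Laird estimate of the between-study variance T^2."""
    y, v = np.asarray(y), np.asarray(v)
    w = 1.0 / v
    ybar = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - ybar) ** 2)              # Cochran's Q statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - (len(y) - 1)) / c)      # truncated at zero

def meta_estimate(y, v, random_effects=True):
    """Inverse-variance weighted meta-effect and its error variance."""
    y, v = np.asarray(y), np.asarray(v)
    tau2 = dersimonian_laird_tau2(y, v) if random_effects else 0.0
    w = 1.0 / (v + tau2)
    return np.sum(w * y) / np.sum(w), 1.0 / np.sum(w)

# Example: study effect estimates y_i with observed error variances v_i
y = np.array([0.30, 0.55, 0.10, 0.42, 0.75])
v = np.array([0.05, 0.12, 0.30, 0.08, 0.25])
print(meta_estimate(y, v, random_effects=False))  # fixed effects, weights 1/v_i
print(meta_estimate(y, v, random_effects=True))   # random effects, weights 1/(v_i + T^2)
```

Because both schemes take the observed v_i at face value, any sampling error in the study-level variance estimates propagates directly into the weights; this is the mechanism by which little-replicated studies bias the meta-estimates described above.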
