Abstract

Dependent effect sizes are ubiquitous in meta-analysis. Using Monte Carlo simulation, we compared the performance of two methods for meta-regression with dependent effect sizes, robust variance estimation (RVE) and 3-level modeling, with the standard meta-analytic method for independent effect sizes. We further compared bias-reduced linearization and jackknife estimators as small-sample adjustments for RVE, and Wald-type and likelihood ratio tests for 3-level models. We compared the bias of the slope estimates, the width of the confidence intervals around those estimates, and the empirical type I error and statistical power rates of the hypothesis tests across methods for mixed-effects meta-regression with one moderator at either the study or the effect size level. All methods yielded nearly unbiased slope estimates under most scenarios, but, as expected, the standard method ignoring dependency produced inflated type I error rates when testing the significance of the moderators. RVE methods yielded the best type I error control, but also the widest confidence intervals and the lowest power rates, especially when using the jackknife adjustments. Three-level models performed promisingly with a moderate to large number of studies, especially with the likelihood ratio test, and yielded narrower confidence intervals around the slope and higher power rates than the RVE approach. All methods performed better when the moderator was at the effect size level, the number of studies was moderate to large, and the between-studies variance was small. Our results can help meta-analysts deal with dependency in their data.
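To make the RVE idea concrete, the following is a minimal sketch (not the authors' simulation code) of a weighted meta-regression with one effect-size-level moderator, where dependent effect sizes are clustered within studies and the slope's standard error is computed with a basic CR0 cluster-robust (sandwich) estimator. All data, weights, and parameter values are simulated for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 20 studies, each contributing 3 dependent effect sizes
n_studies, es_per_study = 20, 3
study = np.repeat(np.arange(n_studies), es_per_study)
x = rng.normal(size=n_studies * es_per_study)      # effect-size-level moderator
u = rng.normal(scale=0.2, size=n_studies)          # between-studies heterogeneity
y = 0.3 + 0.5 * x + u[study] + rng.normal(scale=0.3, size=x.size)

# Weighted least squares meta-regression; weights are inverse sampling
# variances (a constant 0.09 here, purely for illustration)
v = np.full(x.size, 0.09)
W = 1.0 / v
X = np.column_stack([np.ones_like(x), x])          # intercept + moderator
XtWX_inv = np.linalg.inv(X.T @ (W[:, None] * X))
beta = XtWX_inv @ (X.T @ (W * y))                  # slope estimates
resid = y - X @ beta

# CR0 cluster-robust variance: sum over studies of the outer products of
# study-level weighted score contributions, sandwiched in (X'WX)^{-1}
meat = np.zeros((2, 2))
for j in range(n_studies):
    idx = study == j
    Xj = X[idx] * W[idx, None]
    g = Xj.T @ resid[idx]
    meat += np.outer(g, g)
V_robust = XtWX_inv @ meat @ XtWX_inv
se_robust = np.sqrt(np.diag(V_robust))

print("slopes:", beta)
print("cluster-robust SEs:", se_robust)
```

Note that CR0 is the unadjusted estimator; the bias-reduced linearization and jackknife variants evaluated in the abstract apply small-sample corrections on top of this basic sandwich form.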
