Abstract

Publication bias and other forms of outcome reporting bias are critical threats to the validity of findings from research syntheses. A variety of methods have been proposed for detecting selective outcome reporting in a collection of effect size estimates, including several based on assessing the asymmetry of funnel plots, such as Egger's regression test, the rank correlation test, and the trim-and-fill test. Previous research has demonstrated that Egger's regression test is miscalibrated when applied to log-odds ratio effect size estimates because of an artifactual correlation between the effect size estimate and its standard error. This study examines similar problems that occur in meta-analyses of the standardized mean difference, a ubiquitous effect size measure in educational and psychological research. In a simulation study of standardized mean difference effect sizes, we assess the Type I error rates of conventional tests of funnel plot asymmetry, as well as the likelihood ratio test from a three-parameter selection model. Results demonstrate that the conventional tests have inflated Type I error due to the correlation between the effect size estimate and its standard error, while tests based on either a simple modification to the conventional standard error formula or a variance-stabilizing transformation maintain close-to-nominal Type I error.
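To make the mechanism concrete, the sketch below (not from the paper; a minimal illustration under assumed conditions) simulates a meta-analysis of standardized mean differences with no selective reporting and applies Egger's regression test twice: once with the conventional large-sample standard error, which contains the estimate d itself and thereby induces the artifactual correlation the abstract describes, and once with a modified standard error that depends only on the sample sizes (assumed here to be the conventional formula with the d² term dropped). All names, sample sizes, and parameter values are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(20240501)

# --- Simulate K two-group studies under the null of no selection ---
# Hypothetical setup: equal per-group sizes, a common true
# standardized mean difference delta, unit within-group variance.
K = 40
delta = 0.4
n = rng.integers(10, 80, size=K)           # per-group sample sizes

d = np.empty(K)
for k in range(K):
    x = rng.normal(delta, 1.0, size=n[k])  # treatment group
    y = rng.normal(0.0, 1.0, size=n[k])    # control group
    sp2 = ((n[k] - 1) * x.var(ddof=1) +
           (n[k] - 1) * y.var(ddof=1)) / (2 * n[k] - 2)
    d[k] = (x.mean() - y.mean()) / np.sqrt(sp2)

# Conventional large-sample SE of d (for n1 = n2 = n):
# sqrt(2/n + d^2 / (4n)). It depends on d itself, which is what
# correlates the estimate with its standard error.
se_conv = np.sqrt(2.0 / n + d**2 / (4.0 * n))

# Modified SE depending only on sample sizes (assumed form: the
# conventional formula with the d^2 term dropped).
se_mod = np.sqrt(2.0 / n)

def egger_test(d, se):
    """Egger's regression test: regress the standardized effect d/se
    on precision 1/se and t-test the intercept against zero; a
    nonzero intercept signals funnel plot asymmetry."""
    z, prec = d / se, 1.0 / se
    X = np.column_stack([np.ones_like(prec), prec])
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta
    df = len(d) - 2
    cov = (resid @ resid / df) * np.linalg.inv(X.T @ X)
    t0 = beta[0] / np.sqrt(cov[0, 0])
    return t0, 2 * stats.t.sf(abs(t0), df)

for label, se in [("conventional SE", se_conv), ("modified SE", se_mod)]:
    t0, p = egger_test(d, se)
    print(f"{label}: intercept t = {t0:.2f}, p = {p:.3f}")
```

Repeating this simulation many times and tallying how often p < .05 would estimate each variant's Type I error rate; the abstract's finding is that the conventional-SE version rejects too often under the null, while the sample-size-only SE keeps the rejection rate near the nominal level.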
