Abstract

To determine the reproducibility of psychological meta-analyses, we investigated whether we could reproduce 500 primary study effect sizes drawn from 33 published meta-analyses, based on the information given in those meta-analyses, and whether recomputing primary study effect sizes altered the overall results of the meta-analyses. Almost half (k = 224) of the sampled primary effect sizes could not be reproduced from the information reported in the meta-analysis, mostly because of incomplete or missing information on how effect sizes from primary studies were selected and computed. Overall, this led to small discrepancies in the computation of mean effect sizes, confidence intervals, and heterogeneity estimates in 13 of the 33 meta-analyses. We provide recommendations to improve transparency in the reporting of the entire meta-analytic process, including the use of preregistration, data and workflow sharing, and explicit coding practices.

Highlights

  • The ever-increasing growth of scientific publication output [1] has increased the need for, and use of, systematic reviewing of evidence

  • We considered effect sizes reported in 33 randomly chosen meta-analytic articles from psychology, and searched for the corresponding primary study articles to examine whether we could recompute 500 effect sizes reported in the meta-analytic articles; we refer to this as primary study effect size reproducibility

  • We examined whether correcting any errors in these primary study effect sizes affected the main meta-analytic outcomes


Introduction

The ever-increasing growth of scientific publication output [1] has increased the need for, and use of, systematic reviewing of evidence. Meta-analysis involves a set of procedural and statistical techniques for arriving at an overall effect size estimate, and can be used to inspect whether study outcomes differ systematically based on particular study characteristics [2]. Meta-analysts often need to perform complex computations to synthesize primary study results, which increases the risk of faulty data handling and erroneous estimates [5]. When these decisions and calculations are not carefully undertaken and reported, the methodological quality of the meta-analysis cannot be assessed [6,7]. Reproducibility (i.e., reanalyzing the data by following reported procedures and arriving at the same result) is undermined by reporting errors and by inaccurate, inconsistent, or biased decisions in calculating effect sizes.
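To illustrate the kind of computation at stake, here is a minimal sketch of a fixed-effect meta-analysis using inverse-variance weighting. The effect sizes and variances are hypothetical placeholders, not data from the study; the formulas (inverse-variance weights, pooled estimate, 95% CI, Cochran's Q) are standard textbook expressions, not the specific procedures of any of the 33 meta-analyses examined.

```python
import math

# Hypothetical primary-study data: effect sizes (e.g., Cohen's d)
# and their sampling variances, as a meta-analyst might extract them.
effects = [0.30, 0.52, 0.18, 0.41]
variances = [0.04, 0.09, 0.02, 0.05]

# Fixed-effect model: each study is weighted by the inverse of its variance.
weights = [1.0 / v for v in variances]
pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)

# Standard error of the pooled estimate and a 95% confidence interval.
se = math.sqrt(1.0 / sum(weights))
ci = (pooled - 1.96 * se, pooled + 1.96 * se)

# Cochran's Q, a common heterogeneity statistic: the weighted sum of
# squared deviations of study effects from the pooled estimate.
q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, effects))

print(f"pooled = {pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f}), Q = {q:.2f}")
```

Because every quantity here is a function of the extracted effect sizes and variances, an error in even one primary study value propagates into the pooled estimate, the confidence interval, and the heterogeneity statistic, which is why non-reproducible primary effect sizes matter for the overall meta-analytic results.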
