Abstract

Objective: Empirical research that cannot be reproduced using the original dataset and software code (replication files) creates a credibility challenge, as it means those published findings are not verifiable. This study reports the results of a research audit exercise, known as the push button replication project, that tested a sample of studies that use similar empirical methods but span a variety of academic fields.

Methods: We developed and piloted a detailed protocol for conducting push button replication and determining the level of comparability of these replication findings to original findings. We drew a sample of articles from the ten journals that published the most impact evaluations from low- and middle-income countries from 2010 through 2012. This set includes health, economics, and development journals. We then selected all articles in these journals published in 2014 that meet the same inclusion criteria and implemented the protocol on the sample.

Results: Of the 109 articles in our sample, only 27 are push button replicable, meaning the provided code run on the provided dataset produces comparable findings for the key results in the published article. The authors of 59 of the articles refused to provide replication files. Thirty of these 59 articles were published in journals that had replication file requirements in 2014, meaning these articles are non-compliant with their journal requirements. For the remaining 23 of the 109 articles, we confirmed that three had proprietary data, we received incomplete replication files for 15, and we found minor differences in the replication results for five.

Conclusion: The findings presented here reveal that many economics, development, and public health researchers are a long way from adopting the norm of open research. Journals do not appear to be playing a strong role in ensuring the availability of replication files.

Introduction

In May 2015, two of us, as part of the Replication Program of the International Initiative for Impact Evaluation, convened a group of critics, supporters, and others with an interest in replication research for a one-day consultation event in Washington, DC on replication research for international development. All present agreed that reproducing published findings from the original data and code is the most basic replication question. Some argued that this expectation should be a given: that original authors always have the data and code to reproduce their work. Others expressed strong doubts about how frequently authors really can provide the required materials to reproduce the published findings. These doubters argued that replication research should focus, at least initially, on this very first line of verification. Empirical research on this kind of verification supports the views of the doubters. We provide new empirical evidence on whether journal publications of experimental and quasi-experimental studies of interventions in low- and middle-income countries can be verified in this way.
