Abstract

Background: The widespread reluctance to share published research data is often hypothesized to be due to the authors' fear that reanalysis may expose errors in their work or may produce conclusions that contradict their own. However, these hypotheses have not previously been studied systematically.

Methods and Findings: We related the reluctance to share research data for reanalysis to 1148 statistically significant results reported in 49 papers published in two major psychology journals. We found the reluctance to share data to be associated with weaker evidence (against the null hypothesis of no effect) and a higher prevalence of apparent errors in the reporting of statistical results. The unwillingness to share data was particularly clear when reporting errors had a bearing on statistical significance.

Conclusions: Our findings, based on psychological papers, suggest that statistical results are particularly hard to verify when reanalysis is more likely to lead to contrasting conclusions. This highlights the importance of establishing mandatory data-archiving policies.

Highlights

  • Statistical analyses of research data are quite error-prone [1,2,3], accounts of statistical results may be inaccurate [4], and decisions that researchers make during the analytical phase of a study may lean towards the goal of achieving a preferred (significant) result [5,6,7,8]

  • Our findings on the basis of psychological papers suggest that statistical results are hard to verify when reanalysis is more likely to lead to contrasting conclusions

  • In the summer of 2005, Wicherts and colleagues [12] contacted the corresponding authors of 141 papers published in the second half of 2004 in one of four highly ranked journals of the American Psychological Association (APA): Journal of Personality and Social Psychology (JPSP), Developmental Psychology (DP), Journal of Consulting and Clinical Psychology (JCCP), and Journal of Experimental Psychology: Learning, Memory, and Cognition (JEP:LMC)


Introduction

Statistical analyses of research data are quite error-prone [1,2,3], accounts of statistical results may be inaccurate [4], and decisions that researchers make during the analytical phase of a study may lean towards the goal of achieving a preferred (significant) result [5,6,7,8]. For these and other (ethical) reasons [9], many scientific journals, like PLoS ONE [10], and professional organizations, such as the American Psychological Association (APA) [11], have clear policies concerning the sharing of data after research results are published.
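The idea of an "apparent error in the reporting of statistical results" can be made concrete: a reported test statistic and its degrees of freedom jointly imply a p-value, which can be recomputed and compared with the p-value the paper reports. The sketch below is a simplified, hypothetical illustration of such a consistency check (the function name, the tolerance, and the use of SciPy are assumptions for illustration, not the procedure used in this study):

```python
from scipy import stats

def check_result(t, df, reported_p, tol=0.005):
    """Recompute the two-tailed p-value implied by a reported
    t statistic and its degrees of freedom, and flag whether it
    is grossly inconsistent with the reported p-value.

    `tol` is a hypothetical tolerance chosen to absorb ordinary
    rounding in the reported value.
    """
    # Two-tailed p: twice the upper-tail survival probability of |t|
    recomputed = 2 * stats.t.sf(abs(t), df)
    consistent = abs(recomputed - reported_p) <= tol
    return recomputed, consistent

# A report such as "t(10) = 1.00, p = .01" would be flagged,
# because the implied two-tailed p-value is far above .01.
p, ok = check_result(t=1.0, df=10, reported_p=0.01)
```

Checks of this kind matter most when the discrepancy crosses the conventional .05 threshold, i.e., when a reporting error has a bearing on statistical significance.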

