Abstract

We developed a new probabilistic model to assess the impact of recommendations for rectifying the reproducibility crisis (publishing both positive and 'negative' results and increasing statistical power) on competing objectives, such as discovering causal relationships, avoiding the publication of false positive results, and reducing resource consumption. In contrast to recent publications, our model quantifies the impact of each individual suggestion not only on a single study but, in particular, on their interplay and their consequences for the overall scientific process. We show that higher-powered experiments can save resources in the overall research process without generating excess false positives. The better the quality of the pre-study information and its exploitation, the more likely this beneficial effect is to occur. In addition, we quantify the adverse effects both of neglecting good practice in the design and conduct of hypothesis-based research and of omitting the publication of 'negative' findings. Our contribution is a plea for adherence to, and reinforcement of, good scientific practice and for the publication of 'negative' findings.

Highlights

  • Reproducibility can be defined in a number of ways [1]

  • To evaluate the robustness of our model against deviations from its basic assumptions, we introduced further extensions to the model. These are explained in the following paragraphs (for details, see the supporting information, S2 Text): While the model basically postulates that the rules of good scientific practice (GSP) are followed, it includes the possibility of deviating from GSP and thereby increasing the probability of positive results. To model these deviations we introduce the parameter u, defined in accordance with Ioannidis [12] as the 'proportion of probed analyses that would not have been "research findings," but end up presented and reported as such, because of bias.'

  • We focus on the effects of statistical power (1-β) and of the probability of publishing 'negative'/null results (Ppub) on the scientific gain (g), the number of false positives, and the total number of samples required throughout the entire research process (a minimal simulation sketch of these quantities follows this list)
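The following Python sketch is not the authors' model; it is a minimal, self-contained illustration of how the quantities named in these highlights interact over a pool of tested hypotheses, under stated assumptions. The bias parameter u follows the Ioannidis definition quoted above; the pre-study odds R, the sample-size formula, the function names, and all numeric settings are illustrative assumptions rather than values taken from the paper.

```python
# Minimal sketch (not the paper's model): expected published outcomes and
# resource use across a research process, as a function of power (1 - beta),
# bias u (Ioannidis), and the probability P_pub of publishing negative results.
from statistics import NormalDist


def positive_rates(alpha, power, u):
    """Probability that an analysis is reported positive, with bias u,
    given a true and a null relationship respectively (cf. Ioannidis, 2005)."""
    pos_if_true = power + u * (1.0 - power)   # true positives + biased 'rescued' negatives
    pos_if_null = alpha + u * (1.0 - alpha)   # false positives + biased 'rescued' negatives
    return pos_if_true, pos_if_null


def samples_per_group(power, alpha=0.05, effect_size=0.5):
    """Normal-approximation sample size per group for a two-sample comparison
    (purely illustrative proxy for resource consumption)."""
    z = NormalDist().inv_cdf
    return 2.0 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2


def process_summary(n_hypotheses, R, alpha, power, u, p_pub_negative):
    """Expected outcomes over n_hypotheses tested hypotheses with pre-study odds R."""
    p_true = R / (1.0 + R)                    # pre-study probability a hypothesis is true
    pos_if_true, pos_if_null = positive_rates(alpha, power, u)
    true_pos = n_hypotheses * p_true * pos_if_true
    false_pos = n_hypotheses * (1 - p_true) * pos_if_null
    negatives = n_hypotheses - true_pos - false_pos
    return {
        "PPV": true_pos / (true_pos + false_pos),
        "false positives": false_pos,                     # positives assumed always published
        "published negatives": negatives * p_pub_negative,
        "total samples": n_hypotheses * 2 * samples_per_group(power, alpha),
    }


if __name__ == "__main__":
    for power in (0.35, 0.80, 0.95):
        s = process_summary(n_hypotheses=1000, R=0.25, alpha=0.05,
                            power=power, u=0.10, p_pub_negative=0.3)
        print(f"power={power:.2f}  PPV={s['PPV']:.2f}  "
              f"false positives={s['false positives']:.0f}  "
              f"samples={s['total samples']:.0f}")
```

In this toy setting, raising the power increases the per-study sample size but also raises the positive predictive value of the published positive results; the trade-off analysed in the paper is whether the process as a whole then consumes fewer resources because fewer follow-up studies chase false leads.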

Introduction

In a recent 'Perspective' in PLoS Biology [1], an inclusive definition of irreproducibility was adopted that encompasses the existence and propagation of one or more errors, flaws, inadequacies, or omissions that prevent the replication of results. The authors estimated that the irreproducibility of published scientific data ranges from 51% to 89%, an estimate supported by meta-research of recent years. The result of a survey conducted by the journal Nature is therefore not unexpected [10]: the scientific community is clearly concerned about the lack of reproducibility. Scientists responding to the survey believe that the reproducibility of published scientific studies is lower than expected and desired, and 90% of the 1,576 respondents agreed that there is a significant or slight reproducibility crisis.
