In his news story "Reproducibility in psychology" (18 December 2015, p. 1459), J. Bohannon reports on the progress that researchers in psychology have made toward reproducing experimental results. He points out that preregistration, a procedure in which researchers first specify their hypotheses and methods and then publish the results of their analyses regardless of outcome, has helped advance reproducibility goals. This methodology can help curb poor statistical practices, such as running multiple tests on the same data and reporting only those that reach statistical significance.

However, Bohannon's conclusion that "[i]f everyone followed that protocol, false positives might all but disappear from journals" is overstated. Even in the absence of poor statistical practice and pressure to publish statistically significant results, the rate of false-positive results should correspond to the experimenter's chosen significance level, typically denoted α. For a test conducted at the conventional α = 0.05, one would expect 5% of experiments to generate spurious "significant" results, and the probability of independently reproducing a false-positive result at the same significance level is 0.05 × 0.05 = 0.0025. Under these conditions, one-quarter of one percent of experiments would yield a false positive in the original study and again in the replication, seemingly confirming the original erroneous result. Although this chance seems small enough to suggest that false positives "might all but disappear," the observed number of false positives also depends on the number of scientific publications, which by one estimate was 1.35 million articles in 2006 (1). Others have estimated that scientific output may double every 10 years (2).
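The arithmetic behind these figures can be sketched in a few lines of Python. The publication counts and the doubling assumption are the letter's cited estimates, and the scenario in which half of all papers report exactly one test is the letter's hypothetical, not empirical data:

```python
# Back-of-the-envelope check of the letter's arithmetic.
alpha = 0.05                      # conventional significance level
p_confirmed_fp = alpha * alpha    # original AND replication both spuriously "significant"

papers_2006 = 1_350_000           # one estimate of articles published in 2006
papers_2016 = papers_2006 * 2     # assuming output doubles every 10 years
papers_with_one_test = papers_2016 // 2   # hypothetical: half report exactly one test

expected_confirmed_fp = p_confirmed_fp * papers_with_one_test  # ≈ 3375
expected_original_fp = alpha * papers_with_one_test            # ≈ 67,500

print(round(p_confirmed_fp, 4))        # 0.0025
print(round(expected_confirmed_fp))    # 3375
print(round(expected_original_fp))     # 67500
```

So even a 0.25% joint false-positive rate, applied at the scale of the modern literature, leaves thousands of "confirmed" spurious findings per year under these assumptions.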
If that doubling estimate is correct, some 2.7 million scientific papers could be published in 2016. Imagine that half of those papers (about 1.35 million) each contained the result of exactly one statistical test conducted at α = 0.05, and that every one of those tests was reproduced. We would then expect to observe around 3375 "confirmed" spurious results (0.0025 × 1,350,000 = 3375). That is, of course, far better than the 67,500 false positives expected in the original papers (0.05 × 1,350,000 = 67,500), but it hardly amounts to "all but disappear."

Reproducibility is clearly important, and we should support and encourage those who promote it, across all fields, not just psychology, as a crucial part of the scientific enterprise. In particular, moving from publication standards based solely on the statistical significance of a single experiment or a single set of observations to standards based on evidence that results can be reproduced is a critical change that we must make in academic publishing culture. At the same time, we must recognize that even the most careful and rigorous experimental framework can produce erroneous conclusions. We should thus always maintain a healthy skepticism when assessing study results.

References
1. P. O. Larsen, M. von Ins, Scientometrics 84, 575 (2010).
2. D. J. de S. Price, Little Science, Big Science (Columbia Univ. Press, New York, 1963).