The Open Science Collaboration's new platform for publishing the results of replication studies is commendable and indeed bold ("Psychology's bold initiative," S. Carpenter, News Focus, 30 March, p. 1558). However, as convenient and efficient as this platform may prove to be, it should not distract us from the need to revise and extend more conventional, reliable publication channels to accommodate more replication studies. We must ensure that replication studies, not just in psychology but across the sciences, are scrutinized as closely as original studies through the tried-and-tested peer-review system. To do this, we need better incentives and support for scientists who want to perform replications.

The argument that peer-reviewed journals should publish more failed replications is not new (1). A handful of journals publishing detailed methods and protocols have emerged. A broader solution could involve individual scientific disciplines launching their own peer-reviewed journals devoted entirely to publishing the results of replication studies, both failed and successful. It would be important to ensure that these journals do not fall into the trap of a publication bias toward failed replications. Journals with this mission would help to raise the status of replication studies, and scientists would be less likely to see replications as a waste of their time and resources. Postdoctoral researchers and graduate students, in particular, could benefit from the opportunity to learn new experimental techniques and gain early publications.

Systematic funding for replication studies should also be made available. Funding bodies such as the National Institutes of Health and the National Science Foundation could designate awards for replication studies, with priority given to the replication of novel experiments of international scientific importance.
Such initiatives are important not just to address scientific fraud (2), but to ensure that reliability, one of the key principles upon which the authority of science rests, is publicly upheld.

1. J. Giles, Nature 442, 344 (2006).
2. J. Crocker, L. M. Cooper, Science 334, 1182 (2011).