Abstract

Today we salute the Editors! The Editors recently highlighted the issue of research that can be ‘ephemeral...and destined to the dustbin of irreproducible results’. They acknowledged the critical responsibility of Editors and Journals to identify those papers that form the foundation upon which others can build (Bosenberg et al., 2013). The irreproducibility of the majority of preclinical scientific reports has deservedly received considerable attention of late. Several groups have indicated that we have a serious problem, particularly in the ‘top-tier’ journals, with between ~75 and 90% of publications unable to be reproduced (Begley and Ellis, 2012; Booth, 2011; Prinz et al., 2011). This finding was also confirmed by scientists representing both Novartis and AstraZeneca during their presentations at the American Association for Cancer Research annual meeting in 2012. This is a widespread, systemic problem involving many laboratories and multiple journals. It is likely attributable to the intense pressure to publish. The Editors of this Journal have sought to address this challenge head on with new author guidelines. The challenge is to strike an appropriate balance: we certainly do not want to be so rigid that we stifle real innovation, but real innovation has to be reproducible for it to be real. Some of the changes being introduced require authors to provide data supporting the validation of reagents, including antibodies, siRNA, and small molecule inhibitors. This is a major advance, as many irreproducible publications overlook this critical step. Too frequently, the effects of a multipotent small molecule inhibitor are attributed to the authors’ favorite molecule, ignoring the other targets that could be equally responsible. Similarly, antibodies that detect multiple antigens are used, illegitimately, for immunohistochemistry, and the results are again attributed to the authors’ favorite antigen. Now, the controls for those experiments will be required.
The Editors are also demanding that observations be confirmed in more than one cell line. This is another advance, as studies in a single cell line tell us little or nothing about a more general process. The Editors want to see the inclusion of positive and negative controls and a statement as to how many times experiments have been repeated. Data selection is also specifically addressed, with a requirement for a statement as to how many experiments showed a similar result and how many did not. It would seem unnecessary to specify these as requirements for publication, as they would typically be regarded as essential and routine elements of standard scientific practice. Yet frequently in the biological journals, experiments are presented that lack these critical elements. The changes introduced by these Editors represent real progress! Like this Journal, Nature has recently announced new guidelines for authors. They too are to be congratulated. That journal will provide more space for methods, and authors will need to ‘provide precise characterization of key reagents such as cell lines and antibodies’ (Nature, 2013). However, while it is true that ‘exploratory investigations cannot be done with the same level of statistical rigor as hypothesis-testing studies’ (Nature, 2013), there is little acknowledgment of the important difference between the two and of the degree of confidence that can be ascribed to each as a consequence. Further, there is still an inadequate focus on the need for scientific rigor to underpin any result, statistically significant or not. In fact, our current scientific system appears to have evolved to place more value on the more speculative, less reliable ‘exploratory investigations’ than on those studies that confirm or refute a hypothesis.
The latter in particular are poorly valued: those who document the invalidity of a published piece of work seldom get a welcome from journals, funding agencies, or conference organizers, even as money and effort are wasted on false and irreproducible ‘exploratory investigations’. The focus on ‘exploratory investigations’ is perhaps understandable, as we like to be scientifically titillated with a new, exciting idea even if it does not stand the test of time. However, these irreproducible ‘exploratory investigations’ represent the bulk of publications in biology journals. While they may have some value, the ideas that are robust are more likely to move the field forward. Perhaps the exploratory investigations in top-tier journals that typically generate the excitement, the press releases, the speaker invitations, and the grants should be labeled for what they are: ‘Exploratory Studies’. One of the most troubling aspects of our analysis, which spanned a decade (Begley, 2013; Begley and Ellis, 2012), was the realization that leading investigators were unable to reproduce their own results in their own laboratory when experiments were performed blinded. It is, after all, easier to obtain the desired result in the absence of blinding! Simply demanding experiments be