Abstract

[Cover image: 'Evening trails' by Brian Smithson, https://www.flickr.com/photos/smithser/10709730483/, shared under a CC BY 2.0 license]

Peer Review involves several groups with divergent interests: Authors, who wish to see the results of scrupulous work published, in anticipation of career advancement or secure funding; Editors, who are under pressure to identify sound and novel research; Reviewers, who try to fit a thoughtful and time-consuming process into busy schedules, often receiving no credit for it; and Publishers, who compete in a transforming landscape of fast and abundant science publishing.

But has Peer Review always been as challenging as it is nowadays? Some might argue so, though phrases such as 'Publish or Perish' or the 'Impact Factor rat race' are relatively new additions to a Researcher's vocabulary. Moreover, the frequently observed focus of science policies on short-term goals, and the publication speed enabled or even imposed by our information age, are new variables. Worryingly, advances in online handling systems have actually enabled author and reviewer misconduct, as in the recent case of 101 papers that, according to the Ministry of Science and Technology of China, went through fake peer review [1]. This landscape is by no means comparable to that of the 1960s, when regular Peer Review was first introduced by journals [2, 3].

Although several changes have been introduced to address current challenges, the Peer Review process has until recently been preserved in its original form: metadata on Peer Review efficiency have been scarce; the implementation of emerging technologies to detect issues such as inappropriate statistical analysis, figure manipulation and data fabrication has been limited; Peer Reviewers lack systematic training in conducting constructive and ethical peer review; and incentives for performing high-quality peer review are often scarce. However, this situation is now changing, and the first steps towards Peer Review modernization have already been made.

The realization that systematic research on Peer Review needs to be conducted is growing. In July 2017, Carole J. Lee and David Moher, who study scientific publication at the University of Washington and the Ottawa Hospital Research Institute, respectively, published in Science a call for promoting scientific integrity via research on Peer Review [4]. Lee and Moher call on Publishers to open what they name 'the black box of peer review'. They highlight the need for large, systematic experimental studies to assess the various peer review practices and to improve quality and transparency. Going a step further, they present incentives for Publishers to adopt new software systems for assisting Peer Review, evaluating its efficiency and producing metadata on, for example, reviewer scores and comments linked to manuscript content, or reviewer expertise and affiliations. The suggested incentives are hard to ignore: given the persistent plea to de-emphasize the Impact Factor, as well as the need to exclude predatory journals, indicators of Peer Review transparency are likely to gain momentum. Their wide application would provide strong incentives for a Publishers' initiative to support research that improves the quality of the Peer Review process.
Such a collective initiative could even be supported by updating the Transparency and Openness Promotion (TOP) Guidelines to facilitate journal meta-research. One might argue that this is yet another idealistic call; however, reality might prove otherwise. The upcoming International Congress on Peer Review and Scientific Publication, launched by JAMA and BMJ back in 1986 to encourage systematic research on Peer Review [5, 6], features a rich programme on metadata analysis and will host several journal editors and publisher representatives. This is just one indication that the community is responsive to the need for metadata analysis of Peer Review.

In addition, the implementation of Open Peer Review by several publishers increases metadata availability. In Open Peer Review, the identities of both Authors and Reviewers are made public upon manuscript acceptance, often along with the Peer Review reports. EMBO Press journals and Nature Communications publish reviewer reports along with manuscripts on an optional basis [7, 8]. Similarly, BMJ [9], F1000Research [10] and about 70 journals from BioMed Central [11] all use variable models of Open Peer Review but share a common feature: they publish reviewer reports and names alongside the published article.

Publishers are also engaging in lively discussions on the future of Peer Review. For example, in 2016, BioMed Central and Digital Science organized a one-day conference, the SpotOn London event, at which participants discussed 'what peer review might look like in 2030'. This event led to an informative report [12] and will hopefully trigger further discussion when repeated later this year. In parallel, Chris Graf, Research Integrity and Publishing Ethics Director at Wiley, has recently indicated openness to change: 'our plan at Wiley […] includes continuous work […] to collect and record data on trends [and] Editorial initiatives and experiments designed to improve transparency and openness, and thus reproducibility. We are pleased to be an organizational signatory to the TOP guidelines. We publish several journals – the European Journal of Neuroscience in particular – that are leading proponents for 'two-stage' peer review adopted in the Registered Reports model. We will continue to encourage and support journals that experiment with this model, and others like it' [13].

So, it seems that publishers are ready for a change. Most importantly, they are not alone in this effort: policy makers also recognize the need for systematic research on Peer Review. For example, the EU has initiated the COST Action 'PEERE: New frontiers of peer review', a multidisciplinary international project running from 2014 to 2018 that involves quantitative and qualitative research on how peer review is conducted, seeking to make evidence-based recommendations for improving the current system.

Thorough Peer Review seems to obstruct fast publication, and vice versa. However, when it comes to the time limitations faced by both Reviewers and Journal Editors, several emerging technologies can lend a helping hand. Some Publishers have already integrated technologies to safeguard the Peer Review process: AAAS has recently launched a web-based service, Peer Review Evaluation (PRE), that verifies adherence to a journal's Peer Review policy, helping to exclude fake or biased reports [14].
Probably most people engaged in publishing have heard of Crossref's Similarity Check, a web-based service widely used to detect plagiarism. Similar technologies already exist to screen manuscripts for statistical credibility, image integrity and even data fabrication. Penelope was first conceived by its creator, James Harwood, as a web-based tool to help authors adhere to guidelines when preparing a manuscript [15]. Following a collaboration with the EQUATOR Network, the startup company Penelope Research has developed a product that might transform the future of Peer Review. Penelope is presently available to Publishers for checking manuscripts during submission; the checks cover, among other things, appropriate formatting, figure and table legends, references and ethical statements. Importantly, Penelope also checks for proper statistical analysis and can identify cases in which the raw data of a manuscript have previously been published elsewhere. StatReviewer has recently emerged as another promising tool for assessing the statistical soundness of manuscripts. It uses machine learning to check adherence to widely accepted statistical and methodological guidelines, such as CONSORT 2010, and is currently being evaluated in an initial pilot study involving four clinical BioMed Central journals [16]. (A toy sketch of this style of automated screening is given below.)

Do these developments mean that we are heading towards a fully automated Peer Review system? As Chadwick C. DeVoss, Founder and President of StatReviewer, argues in the 2016 SpotOn London report, 'this is where a slippery slope gets extra slippery' [12]. A fully automated publishing pipeline would only undermine exchange and communication between researchers. The aim, instead, is to implement emerging technologies solely as assistive tools that leave Authors and Reviewers valuable time for rewarding interactions.

Researchers are thoroughly trained in conceiving a project and experimenting in a wet or dry lab. Most postgraduate students also receive regular training in writing manuscripts and preparing scientific presentations. But what about conducting Peer Review? Considering the major impact of reviewing quality on scholarly publishing, remarkably little concerted effort has so far been invested in training good Reviewers. With advice mainly restricted to journal guidelines and some editorials, budding Peer Reviewers seem to need more support when expected to produce a good referee report. Beyond that, there is still an open debate on whether young postdocs should be given the chance to gain hands-on experience with Peer Review. The lack of opportunities for young scientists to train in supervised Peer Review is rather unfortunate, given that most are open to both expanding their expertise and investing time in a scientific exchange as rewarding as Peer Review. Yet both individual journal initiatives and a new platform dedicated to Peer Review have set out to change this. For example, the Genetics Society of America has just initiated a pilot programme for training reviewers in the 'principles, purposes, and best practices of Peer Review'. Other Publishers opt for providing online packages of Peer Review training material, including webinars, as is the case with the Wiley Peer Review training.
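To make the idea of automated submission screening more concrete, the following is a minimal, purely illustrative Python sketch of the kind of rule-based checks such a service might run. Every rule, pattern and section name here is an assumption invented for illustration; none of it reflects the actual implementation of Penelope, StatReviewer or PRE.

    import re

    # Illustrative only: section names and rules are hypothetical, not
    # taken from any real screening service.
    REQUIRED_SECTIONS = ["abstract", "methods", "results", "references"]

    def screen_manuscript(text: str) -> list[str]:
        """Return human-readable, advisory warnings for a manuscript draft."""
        warnings = []
        lowered = text.lower()

        # Structural check: are the expected sections mentioned at all?
        for section in REQUIRED_SECTIONS:
            if section not in lowered:
                warnings.append(f"Missing expected section: '{section}'")

        # Ethics check: flag a missing ethics statement (advisory, not blocking).
        if not re.search(r"ethic(s|al)\s+(approval|statement|committee)", lowered):
            warnings.append("No ethics approval statement detected")

        # Statistical sanity check: a p-value reported as exactly zero
        # (e.g. 'p = 0' or 'p < 0.000') is impossible and usually signals
        # over-rounding or a typo.
        for match in re.finditer(r"p\s*[=<]\s*0(\.0+)?(?![.\d])", lowered):
            warnings.append(f"Suspicious p-value: '{match.group(0)}'")

        return warnings

    if __name__ == "__main__":
        draft = "Abstract ... Methods ... Results: p = 0 ... References"
        for warning in screen_manuscript(draft):
            print("WARNING:", warning)

Real services presumably layer far more sophisticated analyses, including machine-learning models, on top of rules like these; the sketch only illustrates the general pattern of scanning a manuscript and emitting advisory warnings rather than hard rejections, leaving the judgement to human Editors and Reviewers.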
Going one step further on the training front, Publons, a platform focusing on Peer Review and aimed at Authors, Reviewers, Editors and Publishers alike, has developed the Publons Academy, a practical Peer Review training course for early-career researchers. The Publons Academy not only provides extensive material on evaluating the novelty, methods, results, conclusions and ethical integrity of a manuscript, but also enables supervised post-publication Peer Review. Experienced Reviewers are encouraged to sign up as supervisors and are acknowledged for this role.

Acknowledgement is, in fact, the groundbreaking component that has already earned Publons a constantly growing community of Reviewers, Editors and Journals. Reviewers get credit for every report they prepare, even when the manuscript is not accepted for publication. This alone is crucial for transforming Peer Review: in a survey conducted by Wiley, four out of five researchers agreed that reviewing is insufficiently acknowledged by journals and research institutions [17]. With the ambition to transform this landscape, Publons provides credit, in the form of a verified peer review record, that can serve as a credential in internal institutional evaluations or in job and grant applications. Such incentives may help to build a universal reviewer database, which would in turn help journals expand their pool of Reviewers; that pool currently seems biased towards senior, male, American and European researchers, and is by no means representative of current researcher and author demographics.

Several developments occurring in parallel thus seem to steer us towards a brand new future for the Peer Review process. Whether this is an optimistic or a realistic view, we are all about to find out. But what are the actual views of researchers on burning questions about Peer Review transparency, quality and recognition? In times of change, it is essential for the main players to become vocal and put forward criticism and suggestions for solutions. How do Reviewers view models of Open Peer Review designed to ensure transparency? Do Authors and Reviewers welcome the use of emerging technologies as part of the peer review process for the sake of faster publication? What do they consider the irreplaceable values of non-automated Peer Review? Do Authors profit from Peer Review more than is generally admitted and acknowledged? How do early-career investigators feel about their inclusion in this system? Are senior researchers who have experienced the evolution of the Peer Review system over the past 20 years prepared to supervise younger colleagues? The debate is still on, and the more views that are heard, the more the system can improve.
