Abstract

Purpose

Peer reviewer evaluations of academic papers are known to be variable in content and overall judgements, but are important academic publishing safeguards. This article introduces a sentiment analysis program, PeerJudge, to detect praise and criticism in peer evaluations. It is designed to support editorial management decisions and reviewers in the scholarly publishing process, as well as grant funding decision workflows. The initial version of PeerJudge is tailored for reviews from F1000Research's open peer review publishing platform.

Design/methodology/approach

PeerJudge uses a lexical sentiment analysis approach with a human-coded initial sentiment lexicon and machine learning adjustments and additions. It was built with an F1000Research development corpus and evaluated on a different F1000Research test corpus using reviewer ratings.

Findings

PeerJudge can predict F1000Research judgements from negative evaluations in reviewers' comments more accurately than baseline approaches, although not from positive reviewer comments, which seem to be largely unrelated to reviewer decisions. Within the F1000Research mode of post-publication peer review, the absence of any detected negative comments is a reliable indicator that an article will be 'approved', but the presence of moderately negative comments could lead to either an approved or an approved with reservations decision.

Originality/value

PeerJudge is the first transparent AI approach to peer review sentiment detection. It may be used to identify anomalous reviews whose text potentially does not match the judgement, for individual checks or systematic bias assessments.
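The abstract describes the approach only at a high level. As a rough illustration of how a lexicon-based review scorer of this general kind could work, the Python sketch below applies a tiny hand-coded praise/criticism lexicon and a toy decision rule mirroring the finding that detected criticism, not praise, is the informative signal. The lexicon entries, function names, example text, and decision rule are illustrative assumptions, not PeerJudge's actual lexicon or code.

    # Illustrative sketch only: the toy lexicon, example review and decision rule
    # below are assumptions for exposition, not PeerJudge's real implementation.
    import re

    # Hypothetical seed lexicon: terms a human coder might mark as praise (+1)
    # or criticism (-1) in peer review reports.
    SEED_LEXICON = {
        "excellent": 1, "clear": 1, "well-written": 1, "novel": 1,
        "unclear": -1, "flawed": -1, "insufficient": -1, "missing": -1,
    }

    def score_review(text, lexicon=SEED_LEXICON):
        """Count praise and criticism terms in a single review report."""
        tokens = re.findall(r"[a-z\-]+", text.lower())
        praise = sum(1 for t in tokens if lexicon.get(t, 0) > 0)
        criticism = sum(1 for t in tokens if lexicon.get(t, 0) < 0)
        return {"praise": praise, "criticism": criticism}

    def predict_judgement(text):
        """Toy rule echoing the paper's finding: no detected criticism suggests
        'approved'; any detected criticism leaves the outcome ambiguous and the
        review is flagged for a human check."""
        if score_review(text)["criticism"] == 0:
            return "approved"
        return "approved or approved with reservations (human check advised)"

    if __name__ == "__main__":
        review = "The study is novel and well written, but the statistics are flawed."
        print(score_review(review), predict_judgement(review))

The real program also applies machine learning adjustments and additions to its human-coded lexicon, which this sketch omits.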

Highlights

  • Academic publishers manage millions of peer review reports for their journals, conferences, books, and publishing platforms, with usually at least two reviewers per document (Clarivate, 2018)

  • This paper introduces a transparent lexical sentiment analysis program, PeerJudge, to estimate the judgement of an academic article reviewer based upon praise or criticism in the accompanying peer review report

  • The article also gives background on open peer review, since differences between open and closed peer review might influence the extent to which PeerJudge can work on both systems

Introduction

Academic publishers manage millions of peer review reports for their journals, conferences, books, and publishing platforms, with usually at least two reviewers per document (Clarivate, 2018). These reports are typically accompanied by a judgement, such as accept, minor revisions, major revisions, or reject. Assessing the quality of a peer review report is complex because the objectives and guidelines issued to reviewers vary across publishers (Jefferson et al., 2002). This variability can make it difficult for reviewers, and for the journal staff and editors who digest their comments, to reach decisions affecting the outcomes of submitted manuscripts. Studies of the quality of open peer review reports have produced mixed findings (Bravo et al., 2019; Jefferson et al., 2002), and so its effect may vary by field.
