Abstract

Using a novel combination of methods and data sets from two national funding agency contexts, this study explores whether review sentiment can be used as a reliable proxy for understanding peer reviewer opinions. We measure reviewer opinions via their review sentiments on both specific review subjects and proposals’ overall funding worthiness with three different methods: manual content analysis and two dictionary-based sentiment analysis algorithms (TextBlob and VADER). The reliability of review sentiment as a proxy for reviewer opinions is assessed through its correlation with review scores, proposal rankings, and funding decisions. We find in our samples that review sentiments correlate positively with review scores or rankings, and the correlation is stronger for manually coded than for algorithmic results; manual and algorithmic results are overall correlated across different funding programs, review sections, languages, and agencies, but the correlations are not strong; and manually coded review sentiments can quite accurately predict whether proposals are funded, whereas the two algorithms predict funding success with only moderate accuracy. The results suggest that manual analysis of review sentiments can provide a reliable proxy of grant reviewer opinions, whereas the two sentiment analysis algorithms can be useful only in some specific situations.
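To illustrate the kind of dictionary-based scoring the abstract refers to, the following minimal Python sketch scores a short review passage with TextBlob and VADER. It is not the authors' pipeline: the example sentence is invented, and the study's manual content analysis and score/ranking correlations are not reproduced here; the sketch only shows how each tool returns a polarity value in [-1, 1].

```python
# Minimal sketch (illustrative only, not the study's actual pipeline):
# scoring a review passage with the two dictionary-based tools named in
# the abstract, TextBlob and VADER. The review_text string is an
# invented example.
from textblob import TextBlob
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

review_text = (
    "The proposed methodology is sound and the team is well qualified, "
    "but the budget justification remains weak."
)

# TextBlob: lexicon-based polarity in [-1, 1] (negative to positive).
textblob_polarity = TextBlob(review_text).sentiment.polarity

# VADER: rule- and lexicon-based; the 'compound' score is also in [-1, 1].
vader_compound = SentimentIntensityAnalyzer().polarity_scores(review_text)["compound"]

print(f"TextBlob polarity: {textblob_polarity:+.3f}")
print(f"VADER compound:    {vader_compound:+.3f}")
```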

Highlights

  • Expert reviewers are central to peer review

  • Using a novel combination of methods and datasets from two national funding agency contexts, this study explores whether review sentiment can be used as a reliable proxy for understanding peer reviewer opinions

  • We find in our samples that 1) review sentiments correlate with review scores or rankings positively, and the correlation is stronger for manually coded than for algorithmic results; 2) manual and algorithmic results are overall correlated across different funding programmes, review sections, languages, and agencies, but the correlations are not strong; 3) manually coded review sentiments can quite accurately predict whether proposals are funded, whereas the two algorithms predict funding success with moderate accuracy



Introduction

Expert reviewers are central to peer review. Based on their recommendations, scientific journals select manuscripts to publish, hiring committees select faculty to hire, and funding agencies select grant proposals to fund. The latter case, grant peer review, is of special interest because reviewers are asked to assess scientific work that has not yet been performed.

