Abstract
Sentiment analysis methods have been developed to automatically extract opinions from texts such as online reviews. Here we explore adapting sentiment analysis to extract grant reviewers' opinions. This paper examines the relationship between the sentiments expressed in grant reviews and the funding decisions agencies make on the reviewed proposals. We define peer reviews' predictiveness as the degree to which reviews' positiveness or negativeness, extracted via sentiment analysis, indicates a proposal's chance of receiving funding. Building on a corpus of peer review texts and related documents from an Irish science funding agency, we extract sentiment both by manual coding and by two simple automated tools (TextBlob and VADER), and compare the three sets of results. We find the manual and automated results to be broadly consistent, though manual coding remains the most accurate and reliable approach for peer review studies. Furthermore, all three coding results strongly predict proposals' funding decisions, indicating that reviewers' opinions are largely adopted by agencies.