Abstract

The scholarly peer-review system is the primary means of ensuring the quality of scientific publications. An area or program chair relies on reviewers' confidence scores to resolve conflicting reviews and borderline cases. Usually, reviewers themselves disclose how confident they are in reviewing a particular paper. However, there can be inconsistencies between the confidence a reviewer self-reports and the confidence the review text conveys to readers. It is the job of the area or program chair to weigh such inconsistencies and make a reasonable judgment. Peer-review texts are a valuable resource for Natural Language Processing (NLP) studies, and the community is uniquely poised to investigate such inconsistencies in the paper-vetting system. In this work, we attempt to estimate automatically, directly from the review text, how confident the reviewer was. We experiment with five data-driven methods to predict the reviewer's confidence score: Linear Regression, Decision Tree, Support Vector Regression, Bidirectional Encoder Representations from Transformers (BERT), and a hybrid of Bidirectional Long Short-Term Memory (BiLSTM) and Convolutional Neural Network (CNN) layers on top of BERT. Our experiments show that the deep neural model grounded in BERT representations yields encouraging performance.
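As a rough illustration of the hybrid approach named above, the sketch below stacks a BiLSTM and a CNN on top of BERT token embeddings and regresses a scalar confidence score. It is a minimal sketch, not the authors' implementation: the checkpoint name, layer sizes, max-pooling choice, and the regression head are all illustrative assumptions.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class BertBiLSTMCNNRegressor(nn.Module):
    """Hypothetical sketch: a BiLSTM + CNN head over BERT token
    embeddings, regressing a scalar reviewer-confidence score."""

    def __init__(self, hidden=128, kernel=3):
        super().__init__()
        # Checkpoint choice is an assumption, not taken from the paper.
        self.bert = AutoModel.from_pretrained("bert-base-uncased")
        self.lstm = nn.LSTM(self.bert.config.hidden_size, hidden,
                            batch_first=True, bidirectional=True)
        self.conv = nn.Conv1d(2 * hidden, hidden,
                              kernel_size=kernel, padding=kernel // 2)
        self.head = nn.Linear(hidden, 1)

    def forward(self, input_ids, attention_mask):
        # Contextual token embeddings: (batch, seq_len, bert_hidden)
        tokens = self.bert(input_ids,
                           attention_mask=attention_mask).last_hidden_state
        seq, _ = self.lstm(tokens)                          # (batch, seq_len, 2*hidden)
        feats = torch.relu(self.conv(seq.transpose(1, 2)))  # (batch, hidden, seq_len)
        pooled = feats.max(dim=2).values                    # global max-pool over time
        return self.head(pooled).squeeze(-1)                # scalar confidence score
```

In practice, one would tokenize the review texts with the matching AutoTokenizer and train the model against the self-reported confidence scores using a regression loss such as mean squared error.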
