Abstract

Fairness in peer review is of vital importance in academic activities. Current peer review systems focus on matching suitable experts with proposals but often ignore the existence of outliers. Previous research has shown that outlier scores in reviews can undermine the fairness of these systems, so detecting them is essential for maintaining fairness. In this paper, we introduce a novel method that employs data-crossing analysis to detect outlier scores, aiming to improve the reliability of peer review processes. We utilize a confidential dataset from a review organization. Because ground-truth scores are unavailable, we systematically derive data-driven deviations from an estimated ground truth through data-crossing analysis. These deviations reveal inconsistencies and abnormal scoring behaviors across reviewers. The method then strengthens the review process by providing a structured mechanism to identify and mitigate biases. Extensive experiments demonstrate its effectiveness in improving the accuracy and fairness of academic assessments, contributing to the broader application of AI-driven methodologies to achieve more reliable and equitable outcomes.
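
To make the core idea concrete, the sketch below shows one simple way to flag deviations from an estimated ground truth when true scores are unavailable. This is an illustrative approximation, not the paper's actual data-crossing analysis: the function name `flag_outlier_scores`, the use of the per-proposal mean as the ground-truth estimate, and the z-score threshold are all assumptions for exposition.

```python
import numpy as np

def flag_outlier_scores(scores, threshold=2.0):
    """Flag reviewer scores that deviate strongly from an estimated ground truth.

    scores: dict mapping proposal_id -> {reviewer_id: score}
    Returns a list of (proposal_id, reviewer_id, deviation) triples.
    """
    flagged = []
    for proposal, by_reviewer in scores.items():
        vals = np.array(list(by_reviewer.values()), dtype=float)
        # Estimate the unknown ground truth with the per-proposal mean;
        # a robust alternative would be the median.
        estimate = vals.mean()
        spread = vals.std(ddof=0) or 1.0  # avoid division by zero
        for reviewer, score in by_reviewer.items():
            deviation = (score - estimate) / spread
            if abs(deviation) > threshold:
                flagged.append((proposal, reviewer, deviation))
    return flagged

# Example: reviewer "r3" gives an anomalously low score to proposal "p1".
reviews = {"p1": {"r1": 7.5, "r2": 8.0, "r3": 2.0, "r4": 7.0}}
print(flag_outlier_scores(reviews, threshold=1.2))
```

In practice, a cross-data approach would also compare each reviewer's deviations across many proposals to separate a genuinely weak submission from a systematically harsh or lenient reviewer.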
