Abstract

Peer code review has proven to be an effective practice for quality assurance and has been widely adopted by commercial companies and open source communities such as GitHub. However, identifying an appropriate code reviewer for a pull request is a non-trivial task given the large number of candidate reviewers. Several approaches have been proposed for reviewer recommendation, yet none of them has been subjected to a comprehensive comparison to determine which is most effective. This paper conducts an experimental evaluation of commonly used and state-of-the-art approaches for code reviewer recommendation. We begin with a systematic review of approaches for code reviewer recommendation and choose six approaches for experimental evaluation. We then implement these approaches and conduct reviewer recommendation on 12 large-scale open source projects with 53,005 pull requests spanning two years. Results show that there is no golden rule when selecting code reviewer recommendation approaches: the best approach varies across evaluation metrics (e.g., Top-5 Accuracy, MRR) and experimental projects. Nevertheless, TIE, which utilizes textual similarity and file path similarity, is the most promising one. We also explore the sensitivity of these approaches to training data and compare their time cost. This study provides new insights and practical guidelines for choosing reviewer recommendation approaches.
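The two evaluation metrics named above, Top-k Accuracy and Mean Reciprocal Rank (MRR), are standard in recommendation research. A minimal sketch of how they are typically computed is shown below; the recommendation lists and ground-truth reviewers are hypothetical examples, not data from the paper.

```python
# Sketch of Top-k Accuracy and MRR for reviewer recommendation.
# Inputs: for each pull request, a ranked list of recommended
# reviewers and the set of reviewers who actually reviewed it.

def top_k_accuracy(ranked_lists, actual_reviewers, k=5):
    """Fraction of pull requests whose top-k recommendations
    contain at least one actual reviewer."""
    hits = sum(
        1 for ranked, actual in zip(ranked_lists, actual_reviewers)
        if set(ranked[:k]) & set(actual)
    )
    return hits / len(ranked_lists)

def mrr(ranked_lists, actual_reviewers):
    """Mean reciprocal rank of the first correct reviewer in each
    list; a list with no correct reviewer contributes 0."""
    total = 0.0
    for ranked, actual in zip(ranked_lists, actual_reviewers):
        for rank, reviewer in enumerate(ranked, start=1):
            if reviewer in actual:
                total += 1.0 / rank
                break
    return total / len(ranked_lists)

# Hypothetical recommendations for three pull requests.
recs = [["alice", "bob", "carol"], ["dave", "erin"], ["frank", "alice"]]
truth = [["bob"], ["erin"], ["grace"]]

print(top_k_accuracy(recs, truth, k=5))  # 2 of 3 PRs have a hit -> 0.666...
print(mrr(recs, truth))                  # (1/2 + 1/2 + 0) / 3 -> 0.333...
```

Under these definitions, a higher Top-k Accuracy rewards any correct reviewer appearing in the shortlist, while MRR additionally rewards placing a correct reviewer near the top of the ranking.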
