Abstract

Software code review is a well-established practice in software engineering. Previous research has identified quality metrics for code review; however, to our knowledge, this paper is the first to use those review smells and metrics as predictors in software defect prediction. We adopted review process metrics used in other studies and also created new ones. A machine learning model is fed with various process metrics (from code review) and product metrics (from software code) to predict whether a pull request might introduce a defect. For the GitHub repositories examined, the mean absolute errors of the predictive models were 0.26 (model built on product metrics only), 0.29 (model built on review metrics only), and 0.25 (model built on combined metrics). These results indicate that the quality of the code review conveys additional valuable information that can be used to predict software defects more accurately. In fact, review metrics alone proved to be almost as good predictors of software defects as the long-investigated and widely used software product metrics.
