Abstract

Context: The SZZ algorithm is the de facto standard for labeling bug fixing commits and finding inducing changes for defect prediction data. Recent research uncovered potential problems in different parts of the SZZ algorithm. Moreover, most defect prediction data sets provide only static code metrics as features, while research indicates that other features are also important.

Objective: We provide an empirical analysis of the defect labels created with the SZZ algorithm and of the impact of commonly used features on results.

Method: We used a combination of manual validation and adopted or improved heuristics for the collection of defect data. We conducted an empirical study on 398 releases of 38 Apache projects.

Results: We found that only half of the bug fixing commits determined by SZZ are actually bug fixing. If a six-month time frame is used in combination with SZZ to determine which bugs affect a release, one file is incorrectly labeled as defective for every file that is correctly labeled as defective. In addition, two defective files are missed. We also explored the impact of the relatively small feature set that is available in most defect prediction data sets, as multiple publications indicate that, e.g., churn-related features are important for defect prediction. We found that using the larger feature set does not lead to a significant difference in results.

Conclusion: Problems with inaccurate defect labels are a severe threat to the validity of the state of the art of defect prediction. Small feature sets seem to be a less severe threat.
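To make the labeling step concrete, the following is a minimal sketch of the SZZ blame step, assuming a local git clone; the helper names, the `deleted_lines` input format, and the report-date filter are illustrative assumptions, not the exact implementation evaluated in this study.

```python
# Minimal sketch of the SZZ blame step, assuming a local git clone.
# Helper names and the (file, old_line) input format are illustrative
# assumptions, not the paper's implementation.
import subprocess
from datetime import datetime

def _git(repo, *args):
    """Run a git command in `repo` and return its stdout."""
    return subprocess.run(["git", "-C", repo, *args],
                          capture_output=True, text=True, check=True).stdout

def blame_before_fix(repo, fix_commit, path, old_line):
    """Attribute a line removed by the fix to the commit that last changed it.
    Line numbers refer to the old (pre-fix) side of the diff, so we blame
    the parent of the fixing commit."""
    out = _git(repo, "blame", "--porcelain", "-L", f"{old_line},{old_line}",
               f"{fix_commit}^", "--", path)
    return out.split()[0]  # first token of porcelain output is the SHA

def candidate_inducing_commits(repo, fix_commit, deleted_lines, report_date):
    """SZZ core idea: every commit that last touched a line removed by the
    bug fix is a candidate inducing change; commits authored after the bug
    was reported cannot have induced it and are filtered out."""
    candidates = set()
    for path, old_line in deleted_lines:  # (file, line) pairs from the fix diff
        sha = blame_before_fix(repo, fix_commit, path, old_line)
        authored = datetime.fromtimestamp(
            int(_git(repo, "show", "-s", "--format=%at", sha).strip()))
        if authored <= report_date:
            candidates.add(sha)
    return candidates
```

The six-month time frame discussed above is a separate release-assignment heuristic applied on top of this step, not part of the blame logic itself.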

Highlights

  • Defect prediction is an active direction of software engineering research with hundreds of publications

  • While this criterion is irrelevant for the evaluation of the defect labeling, we selected the projects with the goal of providing a new defect prediction data set

  • We summarized existing data sets and found that the SZZ algorithm is the standard approach for defect labeling and that most data sets only offer a limited set of features



Introduction

Defect prediction is an active direction of software engineering research with hundreds of publications. The systematic literature review by Hall et al. (2012) already found 208 studies on defect prediction published between 2000 and 2010, and many more have been published since. Many of these studies were enabled by the sharing of data, highlighted by the early efforts of the PROMISE repository (Menzies et al. 2015), which is nowadays known as Seacraft (Menzies et al. 2017). A recent literature review on cross-project defect prediction highlights that these and other data sets have become the de facto standard for defect prediction research (Hosseini et al. 2017). There is evidence that shared defect prediction data is affected by two problems: 1) problems with the defect labels; and 2) limitations regarding the features used by researchers.

