Abstract

Various evaluation datasets are used to assess the performance of information retrieval-based bug localization (IRBL) techniques. To evaluate IRBL accurately, and to improve its performance, the validity of these datasets must be analyzed in advance. To this end, we surveyed 50 previous studies, collected 41,754 bug reports, and identified critical problems, in both the ground truth and the search space, that affect the validity of performance-evaluation results. These problems arise from using different bug types without clearly distinguishing them. We divided the bugs into production- and test-related bugs and, based on this distinction, investigated the impact of bug type on IRBL performance evaluation. Approximately 18.6% of the bug reports were linked to non-buggy files as the ground truth, and up to 58.5% of the source files in the search space introduced noise into the localization of a specific bug type. Our experiments validated that the average precision changed for approximately 90% of the bug reports linked to an incorrect ground truth, and that specifying a suitable search space changed the average precision for at least half of the bug reports. Furthermore, we showed that these problems can alter the relative ranking of IRBL techniques. Our large-scale analysis demonstrated that a significant amount of noise occurs, which can compromise evaluation results. An important finding of this study is that considering bug types is essential to improving the accuracy of performance evaluation.
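To make the search-space effect concrete, the following Python sketch (not the authors' tooling; file paths, the is_test_file heuristic, and the example ranking are hypothetical) shows how the average precision (AP) computed for a single bug report can change when test-related files are excluded from the search space of a production bug.

# Illustrative sketch: AP for one bug report over the full search space
# versus a search space restricted to production files. All names are assumed.

def is_test_file(path: str) -> bool:
    """Hypothetical heuristic splitting test- from production-related files."""
    return "/test/" in path or path.rsplit("/", 1)[-1].startswith("Test")

def average_precision(ranked_files, ground_truth):
    """AP of one bug report: mean of precision@k at each relevant rank k."""
    relevant = set(ground_truth)
    hits, precisions = 0, []
    for k, f in enumerate(ranked_files, start=1):
        if f in relevant:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(relevant) if relevant else 0.0

# Hypothetical ranking produced by some IRBL technique for a production bug.
ranking = ["src/test/FooTest.java", "src/main/Foo.java",
           "src/main/Bar.java", "src/test/BarTest.java"]
ground_truth = {"src/main/Foo.java"}  # production bug: one buggy production file

ap_full = average_precision(ranking, ground_truth)
ap_prod = average_precision([f for f in ranking if not is_test_file(f)],
                            ground_truth)
print(f"AP over full search space:     {ap_full:.2f}")  # 0.50
print(f"AP over production files only: {ap_prod:.2f}")  # 1.00

In this toy case, a test file ranked above the buggy production file halves the AP; restricting the search space to the files relevant to the bug type removes that noise, mirroring the kind of shift the study reports for real bug reports.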
