Abstract
Bug datasets have been created and used by many researchers to build and validate novel bug prediction models. In this work, our aim is to collect existing public source code metric-based bug datasets and unify their contents. Furthermore, we wish to assess the plethora of collected metrics and the capabilities of the unified bug dataset in bug prediction. We considered five public datasets, downloaded the corresponding source code for each system in the datasets, and performed source code analysis to obtain a common set of source code metrics. This way, we produced a unified bug dataset at both class and file level. We investigated the divergence of metric definitions and values across the different bug datasets. Finally, we used a decision tree algorithm to show the capabilities of the dataset in bug prediction. We found that there are statistically significant differences between the values of the original and the newly calculated metrics; furthermore, notations and definitions can differ severely. We compared the bug prediction capabilities of the original and the extended metric suites (within-project learning). Afterwards, we merged all classes (and files) into one large dataset consisting of 47,618 elements (43,744 for files) and evaluated the bug prediction model built on this large dataset as well. Finally, we also investigated the cross-project capabilities of the bug prediction models and datasets. We made the unified dataset publicly available for everyone. By using a public unified dataset as an input for different bug prediction related investigations, researchers can make their studies reproducible, and thus able to be validated and verified.
Highlights
Finding and eliminating bugs in software systems has always been one of the most important issues in software engineering.
We were already familiar with the usage of the ckjm tool, so we chose to calculate the ckjm metrics for the Bug Prediction dataset. Since the Bug Prediction dataset was originally created with Moose, we extended it with the OpenStaticAnalyzer 1.0 (OSA) metrics and – for the sake of this comparison – with the ckjm metrics; this way, we could assess all metrics of all tools.
Our aim was not to create the best possible bug prediction model, but to show that the dataset is a usable source of learning data for researchers who want to experiment with bug prediction models.
Summary
Software Quality Journal (2020) 28:1447–1506
Finding and eliminating bugs in software systems has always been one of the most important issues in software engineering. Bug or defect prediction is a process by which we try to learn from mistakes committed in the past and build a prediction model to predict the location and amount of future bugs. Many research papers have been published on bug prediction, introducing new approaches that aim to achieve better precision values (Zimmermann et al 2009; Xu et al 2000; Hall et al 2012; Weyuker et al 2010). To carry out such experiments, bugs have to be associated with source code, which in and of itself is a difficult task. It is necessary to use a version control system and a bug tracking system properly during the development process, and even in this case, it is still challenging to associate bugs with the problematic source code locations.
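To illustrate the kind of learning the summary describes – fitting a decision-tree-style model on class-level source code metrics to flag likely buggy classes – here is a minimal sketch. The metric names (WMC, CBO, LOC) and the training values are hypothetical stand-ins, not data from the unified dataset, and the single-split "stump" is a deliberately simplified stand-in for a full decision tree learner:

```python
# Hypothetical training rows: (class-level metrics, is_buggy label).
# The values below are invented for illustration only.
TRAIN = [
    ({"WMC": 42, "CBO": 17, "LOC": 900}, True),
    ({"WMC": 35, "CBO": 14, "LOC": 700}, True),
    ({"WMC": 30, "CBO": 12, "LOC": 650}, True),
    ({"WMC": 6,  "CBO": 3,  "LOC": 120}, False),
    ({"WMC": 9,  "CBO": 4,  "LOC": 200}, False),
    ({"WMC": 12, "CBO": 5,  "LOC": 260}, False),
]

def best_stump(rows):
    """Pick the (metric, threshold) split that misclassifies the fewest
    rows, predicting 'buggy' whenever the metric value exceeds the
    threshold. A real decision tree repeats this greedily per node."""
    best = None  # (error_count, metric, threshold)
    for m in rows[0][0]:
        values = sorted({r[0][m] for r in rows})
        # Candidate thresholds: midpoints between consecutive values.
        for lo, hi in zip(values, values[1:]):
            t = (lo + hi) / 2
            errors = sum((r[0][m] > t) != label for r, label in
                         ((row, row[1]) for row in rows))
            if best is None or errors < best[0]:
                best = (errors, m, t)
    return best[1], best[2]

metric, threshold = best_stump(TRAIN)

def predict(metrics):
    """Flag a class as buggy if its chosen metric exceeds the threshold."""
    return metrics[metric] > threshold

# The stump should at least separate its own training data.
assert all(predict(row) == label for row, label in TRAIN)
```

A real experiment would of course use a full decision tree implementation and proper train/test splits (within-project or cross-project, as in the paper), but the core idea – thresholding source code metrics learned from past bugs – is the same.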