Abstract

Software defect prediction aims at narrowing down the most likely defect-prone software modules, helping developers and testers to prioritize inspection and testing. This activity can be addressed by using Machine Learning techniques applied to software metrics datasets that are usually unlabelled, i.e. they lack a classification of modules in terms of defectiveness. To overcome this limitation, in addition to the usual data pre-processing operations to manage missing values and/or to remove inconsistencies, researchers have to adopt an approach to label their unlabelled software datasets. The extraction of defectiveness data to label all the instances of a dataset is an extremely time- and effort-consuming operation. In the literature, many studies have introduced approaches to build defect prediction models on unlabelled datasets. In this paper, we describe the analysis of new unlabelled datasets from WLCG software, coming from HEP-related experiments and middleware, by using Machine Learning techniques. We have experimented with new approaches to label the various modules, owing to the heterogeneity of the software metrics distributions. We discuss a number of lessons learned from conducting these activities: what has worked, what has not, and how our research can be improved.

Highlights

  • Machine learning (ML) as a means to help in different Software Engineering (SE) tasks, such as software defects prediction and test code generation, has been often considered in research studies in the last decades [1,2,3,4,5]

  • ML techniques are fed with input software data properly processed and collected in datasets that are composed of instances, i.e. software modules, and features, i.e. software metrics [6]

  • We have found that software projects have documentation related to code changes, like release notes, which can be exploited to provide an assessment of the defectiveness prediction in software


Summary

Background

Machine learning (ML) as a means to help in different Software Engineering (SE) tasks, such as software defect prediction and test code generation, has often been considered in research studies over the last decades [1,2,3,4,5]. To address the limitation of supervised learning techniques in constructing defect prediction models from unlabelled datasets, researchers have proposed various approaches, which can be categorized into five groups. The Clustering, LAbelling, Metric selection, Instance selection (CLAMI) approach is based on a four-step procedure applied to the instances of an unlabelled dataset. It is an automatable approach which does not involve human effort, but it relies on metrics' values, which may not always be comparable and may introduce bias. The extraction of the complete set of features (metrics and labels) is time- and effort-consuming: even the selection of the right tool for metrics' extraction can be difficult. For these reasons, unlabelled datasets are the vast majority of software datasets. To perform defect prediction with unlabelled datasets, it is necessary to find an automatable way to label instances.
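To make the clustering and labelling steps of CLAMI concrete, the following is a minimal sketch of the labelling idea only, under simplifying assumptions: it uses the median of each metric as the cutoff, counts for each instance how many of its metric values exceed the cutoff (the value K in the original formulation), and labels instances above the median K as defect-prone. The function name `clami_label` and the dictionary-based instance representation are illustrative choices, not part of the original approach, and the full CLAMI procedure additionally performs metric selection and instance selection, which are omitted here.

```python
import statistics

def clami_label(instances):
    """Sketch of CLAMI-style clustering and labelling.

    instances: list of dicts mapping metric name -> numeric value.
    Returns a list of 'buggy'/'clean' labels, one per instance.
    """
    metrics = instances[0].keys()
    # Per-metric cutoff: the median of that metric across all instances.
    thresholds = {m: statistics.median(inst[m] for inst in instances)
                  for m in metrics}
    # K = number of metrics whose value exceeds the cutoff, per instance.
    ks = [sum(1 for m in metrics if inst[m] > thresholds[m])
          for inst in instances]
    # Instances in the clusters with K above the median K are
    # labelled defect-prone; the rest are labelled clean.
    k_cut = statistics.median(ks)
    return ['buggy' if k > k_cut else 'clean' for k in ks]

# Hypothetical example with two metrics (lines of code, cyclomatic complexity):
modules = [{'loc': 10, 'cc': 1},
           {'loc': 100, 'cc': 8},
           {'loc': 20, 'cc': 2}]
print(clami_label(modules))  # ['clean', 'buggy', 'clean']
```

Because every step is computed from the metric values themselves, no human-provided labels are needed, which is exactly why the approach is automatable; the flip side, as noted above, is that the median cutoffs assume metric distributions are comparable across modules.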

Experimental Setup
Lessons Learned
Findings
Conclusion
