Abstract

The cost of software testing could be reduced if faulty entities were identified before the testing phase, which is possible with software fault prediction (SFP). Most SFP models rely on machine learning (ML) methods, and one way to improve their prediction accuracy is to tune their control parameters. However, parameter tuning has not been addressed adequately in software analytics: the conventional methods (such as basic Differential Evolution, Random Search, and Grid Search) are either not up-to-date, unable to benefit from prior experience, or overly expensive. This study examines and proposes parameter tuners for SFP, called DEPTs, which are based on different variants of Differential Evolution combined with a Swift-Finalize strategy to reduce runtime; in addition to being up-to-date, they overcome many of the shortcomings of the common methods. We developed an experimental framework that compares DEPTs with three widely used parameter tuners, applied to four common data miners, on 10 open-source projects, and evaluates their performance with eight measures. According to our results, three of the five DEPTs improved prediction accuracy in more than 70% of the tuned cases, and they occasionally exceeded the benchmark methods by over 10% in terms of the G-measure. The DEPTs also tuned parameters for SFP in reasonable amounts of time.
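To make the tuning idea concrete, the following is a minimal sketch of Differential Evolution (DE/rand/1/bin) applied to the hyperparameters of a fault-prediction learner. The synthetic data, the DecisionTreeClassifier, the chosen parameter ranges, and the DE settings are illustrative assumptions only; they do not reproduce the paper's DEPT variants, the Swift-Finalize strategy, or its datasets and measures.

```python
# Illustrative sketch: DE/rand/1/bin tuning a CART learner for fault prediction.
# All names, ranges, and data below are assumptions, not the authors' setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Stand-in defect data: replace with a real SFP dataset (e.g., a project's metrics).
X, y = make_classification(n_samples=400, n_features=20, weights=[0.8], random_state=0)

# Parameter box to search: (max_depth, min_samples_split, min_samples_leaf).
LOW = np.array([1.0, 2.0, 1.0])
HIGH = np.array([20.0, 20.0, 12.0])

def fitness(vec):
    """Cross-validated F1 of a CART learner built from one DE candidate vector."""
    depth, split, leaf = np.rint(vec).astype(int)
    model = DecisionTreeClassifier(max_depth=depth, min_samples_split=split,
                                   min_samples_leaf=leaf, random_state=0)
    return cross_val_score(model, X, y, cv=3, scoring="f1").mean()

def de_rand_1_bin(pop_size=10, gens=20, f=0.5, cr=0.9):
    """Classic DE/rand/1/bin over the parameter box defined by LOW and HIGH."""
    pop = rng.uniform(LOW, HIGH, size=(pop_size, len(LOW)))
    scores = np.array([fitness(p) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # Mutation: combine three distinct candidates other than the target.
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + f * (b - c), LOW, HIGH)
            # Binomial crossover with at least one gene taken from the mutant.
            cross = rng.random(len(LOW)) < cr
            cross[rng.integers(len(LOW))] = True
            trial = np.where(cross, mutant, pop[i])
            # Greedy selection: keep the trial only if it is at least as good.
            trial_score = fitness(trial)
            if trial_score >= scores[i]:
                pop[i], scores[i] = trial, trial_score
    best = pop[np.argmax(scores)]
    return np.rint(best).astype(int), scores.max()

if __name__ == "__main__":
    params, score = de_rand_1_bin()
    print("tuned (max_depth, min_samples_split, min_samples_leaf):", params, "F1:", score)
```

Other DE variants differ mainly in the mutation and crossover rules used inside the inner loop, which is why they can be swapped in without changing the overall tuning framework.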
