Abstract

Producing high-quality software is challenging given the size and complexity of modern software systems. Checking the software for faults in the early phases of development reduces the resources required for testing. This empirical study evaluates the performance of machine learning models and fuzzy logic algorithms on the problem of predicting software fault proneness. The experiments use the public-domain NASA KC1 data set. The fault-prediction methods are evaluated using measures such as receiver operating characteristic (ROC) analysis and root mean squared error (RMSE), and the resulting comparison of the algorithms/models is presented in this paper.
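For illustration only, the sketch below shows one common way to run the kind of evaluation the abstract describes: cross-validated predictions on KC1-style metric data, scored with ROC AUC and RMSE. The file name kc1.csv, the defects label column, and the two stand-in classifiers are assumptions for the sketch, not the models or tooling used in the study.

```python
# A minimal sketch (not the authors' code): evaluating fault-proneness
# predictors on a KC1-style data set with ROC AUC and RMSE.
import pandas as pd
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import roc_auc_score, mean_squared_error

data = pd.read_csv("kc1.csv")            # hypothetical local copy of KC1
X = data.drop(columns=["defects"])       # static code metrics per module
y = data["defects"].astype(int)          # 1 = fault-prone module

# Stand-in classifiers; the paper compares its own set of ML/fuzzy models.
models = {
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "naive_bayes": GaussianNB(),
}
for name, model in models.items():
    # Out-of-fold predicted probability of the fault-prone class.
    prob = cross_val_predict(model, X, y, cv=10,
                             method="predict_proba")[:, 1]
    auc = roc_auc_score(y, prob)
    rmse = mean_squared_error(y, prob) ** 0.5
    print(f"{name}: ROC AUC = {auc:.3f}, RMSE = {rmse:.3f}")
```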
