Abstract

Software development involves many risks, and errors residing in software modules represent a major kind of risk. Software defect prediction techniques and tools that identify software errors therefore play a crucial role in software risk management. Among software defect prediction techniques, classification is a commonly used approach, and various types of classifiers have been applied to software defect prediction in recent years. Selecting an adequate classifier (or set of classifiers) to identify error-prone software modules is an important task for software development organizations. Many different measures exist for evaluating classifiers, and each measure assesses a different aspect of a classifier's behavior. This paper develops a performance metric that combines various measures to evaluate the quality of classifiers for software defect prediction. The performance metric is analyzed experimentally using 13 classifiers on 11 public-domain software defect datasets. The results of the experiment indicate that support vector machines (SVM), the C4.5 algorithm, and the K-nearest-neighbor algorithm rank as the top three classifiers.
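The abstract does not specify how the combined performance metric is computed; one common way to fuse several evaluation measures into a single score is a weighted average. The sketch below illustrates that idea only — the measure names, weights, and example scores are illustrative assumptions, not the paper's actual metric or results.

```python
# Hypothetical sketch: combine several classifier evaluation measures
# (each assumed to lie in [0, 1]) into one composite score via a
# weighted average, then rank classifiers by that score.
# All names, weights, and numbers below are illustrative assumptions.

def composite_score(measures, weights=None):
    """Return the weighted average of a dict of measure values."""
    if weights is None:
        weights = {name: 1.0 for name in measures}  # equal weighting
    total = sum(weights[name] for name in measures)
    return sum(measures[name] * weights[name] for name in measures) / total

# Illustrative (made-up) per-classifier measures, not results from the paper.
results = {
    "SVM":  {"accuracy": 0.87, "f_measure": 0.84, "auc": 0.90},
    "C4.5": {"accuracy": 0.85, "f_measure": 0.83, "auc": 0.88},
    "kNN":  {"accuracy": 0.84, "f_measure": 0.80, "auc": 0.86},
}
ranking = sorted(results, key=lambda c: composite_score(results[c]),
                 reverse=True)
print(ranking)  # classifiers ordered best-to-worst by composite score
```

A weighted average is only one possible fusion rule; rank aggregation or multi-criteria methods are equally plausible ways to combine such measures.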
