Abstract

Software testing is the process of finding and fixing bugs, which may require changes to the design or to the logic of the software. Researchers have proposed many tools and methods based on machine learning techniques to assist practitioners in decision making and in automating software engineering tasks. These tools help to find faulty classes at an early phase of the software development life cycle. After faulty classes are identified with such tools, testers use different techniques to locate and fix the underlying faults. Early identification and fixing of faults improves the quality of the software and reduces the cost required to fix them. The primary objective of this work is to understand whether faults present in code elements are indicators of problems in the design of the software. This work investigates the impact of bug-fixing operations on four popular internal quality attributes: complexity, cohesion, inheritance, and coupling. The investigation has been validated on thirteen different projects. Furthermore, we have also investigated the feasibility of prediction models for predicting changes in internal quality attributes. These prediction models are trained using five different classifiers on balanced data as well as original data and validated using fivefold cross-validation. The experimental results show that models using LSSVM with a polynomial kernel have better predictive power than the other techniques. The results also show that, in more than 80% of cases, bugs are present in classes having at least one critical attribute. Furthermore, the consistent AUC values reveal that changes in internal quality attributes can be predicted using source code metrics.
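To make the evaluation setup concrete, the sketch below illustrates the kind of pipeline the abstract describes: a classifier trained on source code metrics to predict whether an internal quality attribute changes after a bug fix, validated with fivefold cross-validation and scored by AUC. It is not the authors' code; the data, feature names, and labels are hypothetical, and a polynomial-kernel SVM from scikit-learn is used as a stand-in for the LSSVM mentioned in the paper.

```python
# Illustrative sketch only: fivefold cross-validation of a polynomial-kernel SVM
# (a stand-in for LSSVM) predicting changes in an internal quality attribute
# from source code metrics. All data and labels below are synthetic placeholders.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical metrics per class: complexity, cohesion, inheritance, coupling.
X = rng.normal(size=(500, 4))
# Hypothetical label: 1 if the attribute changed after the bug fix, else 0.
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = make_pipeline(
    StandardScaler(),
    SVC(kernel="poly", degree=3),  # polynomial kernel, as in the paper
)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(f"Mean AUC over 5 folds: {auc.mean():.3f}")
```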
