Abstract

The most resource-intensive and laborious part of debugging is locating the exact fault among a large number of code snippets. Many machine intelligence models have been proposed for effective defect localization, and some can pinpoint faulty statements with more than 95% accuracy, creating demand for trustworthy fault localization models. Confidence and trustworthiness in machine intelligence-based software models can only be achieved through explainable artificial intelligence for Fault Localization (XFL). The current study presents a model for generating counterfactual interpretations of a fault localization model's decisions. Building a nonlinear neural network model enables neural system approximation and a distributed representation of the input information, and the model demonstrates strong transfer learning performance even with minimal training data. The proposed XFL makes the decision-making transparent without impacting the model's performance. It ranks program statements by a vulnerability score approximated from the training data. The model's performance is further evaluated using metrics such as the number of statements examined, the confidence level of fault localization, and Top-N evaluation strategies.
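As a rough illustration of the ranking and Top-N evaluation described above, the following Python sketch ranks statements by a vulnerability score and checks whether a truly faulty statement appears within the top N ranks. All statement identifiers, scores, and function names here are hypothetical placeholders, not the paper's actual model or data.

```python
# Minimal sketch of vulnerability-score ranking and Top-N evaluation.
# Scores and statement IDs are illustrative, not from the paper.

def rank_statements(vulnerability_scores):
    """Rank statement IDs by descending vulnerability score."""
    return sorted(vulnerability_scores, key=vulnerability_scores.get, reverse=True)

def top_n_hit(ranked, faulty_statements, n):
    """True if any truly faulty statement appears in the top N ranks."""
    return any(stmt in faulty_statements for stmt in ranked[:n])

# Hypothetical scores approximated from training data.
scores = {"s1": 0.12, "s2": 0.87, "s3": 0.55, "s4": 0.91}
ranked = rank_statements(scores)        # ['s4', 's2', 's3', 's1']
print(top_n_hit(ranked, {"s2"}, n=1))   # False: fault not ranked first
print(top_n_hit(ranked, {"s2"}, n=3))   # True: fault within the top 3
```

In practice, a Top-N result is typically averaged over many faulty programs to report the fraction of faults localized within the first N ranked statements.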
