Abstract
The most resource-intensive and laborious part of debugging is pinpointing the exact location of a fault among a large number of code snippets. Many machine intelligence models have been proposed for effective defect localization. Some models can locate faulty statements with more than 95% accuracy, creating a demand for trustworthy fault localization models. Confidence and trustworthiness in machine intelligence-based software models can only be achieved through explainable artificial intelligence for Fault Localization (XFL). The current study presents a model for generating counterfactual interpretations of the fault localization model's decisions. Building a nonlinear neural network model provides neural system approximation and a distributed representation of the input information, and the resulting model demonstrates strong transfer learning performance even with minimal training data. The proposed XFL makes the decision-making transparent without impacting the model's performance. It ranks the software program statements by a vulnerability score approximated from the training data. The model's performance is further evaluated using metrics such as the number of statements examined, the confidence level of fault localization, and Top-N evaluation strategies.
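A minimal sketch of the ranking and Top-N evaluation steps described above (the statement identifiers, scores, and helper names are illustrative placeholders, not taken from the paper):

```python
# Sketch: rank program statements by an approximated vulnerability
# score and check a Top-N hit. All identifiers and score values
# below are hypothetical, for illustration only.

def rank_statements(scores: dict[str, float]) -> list[str]:
    """Order statement IDs from most to least suspicious."""
    return sorted(scores, key=scores.get, reverse=True)

def top_n_hit(ranking: list[str], faulty: str, n: int) -> bool:
    """Top-N strategy: is the true faulty statement within the first N ranks?"""
    return faulty in ranking[:n]

# Hypothetical per-statement vulnerability scores from a trained model.
scores = {"s1": 0.12, "s2": 0.87, "s3": 0.45, "s4": 0.91}
ranking = rank_statements(scores)            # ['s4', 's2', 's3', 's1']
print(top_n_hit(ranking, faulty="s2", n=1))  # False: s2 is ranked second
print(top_n_hit(ranking, faulty="s2", n=3))  # True: s2 is within the top 3
```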