Abstract

The majority of eXplainable Artificial Intelligence (XAI) methods assume that the decision boundary is locally linear, leading to significant errors when the local decision boundary is non-linear. Moreover, explanation methods that rely on perturbation samples yield only a single point estimate for each feature contribution, with no measure of its uncertainty. This study aims to overcome these limitations by introducing a novel explanation method that accurately captures non-linear local classification boundaries or regression splines while also quantifying the uncertainty associated with each feature contribution. Building upon the state-of-the-art XAI method Local Interpretable Model-agnostic Explanations (LIME), we propose a non-linear local explainer called BMB-LIME (Bootstrap aggregating Multivariate adaptive regression splines Bayesian LIME). BMB-LIME is constructed from weighted multivariate adaptive regression splines (MARS) with bootstrap aggregating and models the uncertainty of feature contributions within a Bayesian framework. Through a series of experiments, we demonstrate that the BMB-LIME local explainer outperforms baseline methods in terms of local fidelity and stability: its adjusted R-squared shows a statistically significant improvement of at least 80.5 percent, and it consistently achieves over 89.1 percent consistency in feature contributions and Jaccard similarity across diverse real-world and simulated datasets. The proposed method not only enhances fidelity and stability but also improves the measurement of uncertainty in explanations, thereby contributing significantly to the trustworthiness and transparency of XAI.
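
To make the construction concrete, the sketch below illustrates the general idea of a bagged non-linear local surrogate in the spirit of BMB-LIME; it is not the authors' implementation. It draws LIME-style perturbations around the instance, weights them with a proximity kernel, fits a bootstrap ensemble of non-linear surrogates, and reports the spread of feature contributions across the ensemble as a rough uncertainty estimate. The function name, the parameters (`n_samples`, `n_bags`, `scale`, `kernel_width`), and the use of `DecisionTreeRegressor` as a stand-in for weighted MARS are all assumptions; the paper's Bayesian treatment of uncertainty is only approximated here by the bootstrap spread.

```python
# Hypothetical sketch of a bagged non-linear local surrogate (BMB-LIME-style).
# NOTE: DecisionTreeRegressor stands in for MARS (the paper fits weighted MARS),
# and the Bayesian uncertainty is approximated by the spread across bootstrap fits.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def bagged_local_explanation(black_box, x0, n_samples=1000, n_bags=30,
                             scale=0.3, kernel_width=0.75, seed=0):
    """Explain black_box(x0) with a bagged non-linear local surrogate.

    black_box : callable mapping an (n, d) array to an (n,) array of outputs
                (e.g. predicted probability of the class of interest).
    x0        : (d,) instance to explain.
    Returns the mean and standard deviation of per-feature contributions
    across bootstrap surrogates (illustrative uncertainty estimate).
    """
    rng = np.random.default_rng(seed)
    d = x0.shape[0]

    # 1. LIME-style perturbations around the instance, with proximity weights.
    X = x0 + rng.normal(scale=scale, size=(n_samples, d))
    y = black_box(X)
    dist = np.linalg.norm(X - x0, axis=1)
    w = np.exp(-(dist ** 2) / (kernel_width ** 2))  # exponential kernel

    contributions = np.zeros((n_bags, d))
    for b in range(n_bags):
        # 2. Weighted bootstrap: resample perturbations in proportion to w.
        idx = rng.choice(n_samples, size=n_samples, replace=True, p=w / w.sum())
        surrogate = DecisionTreeRegressor(max_depth=3, random_state=b)
        surrogate.fit(X[idx], y[idx])

        # 3. Finite-difference sensitivity of each feature at x0 (a crude proxy;
        #    the paper derives contributions from the fitted MARS basis functions).
        base = surrogate.predict(x0.reshape(1, -1))[0]
        for j in range(d):
            x_shift = x0.copy()
            x_shift[j] += scale
            contributions[b, j] = surrogate.predict(x_shift.reshape(1, -1))[0] - base

    # 4. Aggregate across bags: mean contribution and its variability.
    return contributions.mean(axis=0), contributions.std(axis=0)

# Example usage with a trained scikit-learn classifier `clf` (assumed):
# mean_c, std_c = bagged_local_explanation(lambda X: clf.predict_proba(X)[:, 1], x0)
```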
