Abstract

Explainable artificial intelligence (XAI) is essential for improving the reliability and transparency of neural network decision-making. SHapley Additive exPlanations (SHAP) is a game-theoretic approach to network interpretation that attributes a model's prediction confidence to its input features in order to measure their importance. However, SHAP relies on the flawed assumption that the model's features are independent, which leads to incorrect results when features are correlated. In this paper, we introduce a novel manifold-based Shapley explanation method, termed Latent SHAP. Latent SHAP transforms high-dimensional data into low-dimensional manifolds to capture correlations among features. We compute Shapley values on the data manifold and devise three distinct gradient-based mapping methods to transfer them back to the high-dimensional input space. Our primary contributions are: (1) correcting misinterpretations produced by SHAP on certain samples; (2) handling feature correlations in high-dimensional data interpretation; and (3) reducing algorithmic complexity by computing Shapley values on the low-dimensional manifold, enabling application to complex network interpretation. Code is available at https://github.com/Teriri1999/Latent-SHAP.
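
The pipeline the abstract describes (encode inputs onto a low-dimensional manifold, compute Shapley values there, then map the attributions back to input space with a gradient-based rule) can be sketched briefly. The snippet below is a minimal illustration only, under strong simplifying assumptions: a PCA encoder stands in for the learned manifold, Shapley values are enumerated exactly over the small latent space, and a single gradient-times-input style split stands in for the paper's three gradient-based mapping methods. The names `encode`, `latent_shapley`, and `map_back` are hypothetical and not the repository's API.

```python
import itertools
import math
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 10 input features driven by 3 latent factors, so the
# features are strongly correlated (the setting Latent SHAP targets).
n, D, d = 200, 10, 3
W = rng.normal(size=(d, D))
X = rng.normal(size=(n, d)) @ W + 0.01 * rng.normal(size=(n, D))

# Stand-in "manifold": a linear PCA encoder/decoder fit on X.
mu = X.mean(axis=0)
V = np.linalg.svd(X - mu, full_matrices=False)[2][:d].T   # (D, d)

def encode(x):
    return (x - mu) @ V

def decode(z):
    return mu + z @ V.T

# Black-box model to explain; any scalar-valued function of x works here.
beta = rng.normal(size=D)

def model(x):
    return float(np.tanh(x @ beta))

def latent_shapley(x, z_base):
    """Exact Shapley values of f(z) = model(decode(z)) over the d latent dims."""
    z = encode(x)

    def f(mask):
        # Coalition members keep the instance's latent values; the rest
        # are set to the baseline latent point.
        return model(decode(np.where(mask, z, z_base)))

    phi = np.zeros(d)
    for k in range(d):
        others = [j for j in range(d) if j != k]
        for r in range(d):
            for S in itertools.combinations(others, r):
                mask = np.zeros(d, dtype=bool)
                mask[list(S)] = True
                w = math.factorial(r) * math.factorial(d - r - 1) / math.factorial(d)
                with_k = mask.copy()
                with_k[k] = True
                phi[k] += w * (f(with_k) - f(mask))
    return z, phi

def map_back(x, phi_z, eps=1e-12):
    """One gradient-times-input style mapping: split each latent attribution
    across inputs in proportion to V[i, k] * (x_i - mu_i). Since
    z_k = sum_i V[i, k] * (x_i - mu_i), each normalized column sums to ~1,
    so the input attributions preserve the latent attributions' total."""
    shares = V * (x - mu)[:, None]                   # (D, d)
    shares = shares / (shares.sum(axis=0, keepdims=True) + eps)
    return shares @ phi_z                            # (D,) input-space attributions

x = X[0]
z_base = encode(X).mean(axis=0)
z, phi_z = latent_shapley(x, z_base)
phi_x = map_back(x, phi_z)
print("latent attributions:", np.round(phi_z, 4))
print("input attributions :", np.round(phi_x, 4))
# Completeness check: attributions sum to f(instance) - f(baseline) on the manifold.
print("sum:", round(phi_x.sum(), 6), "target:",
      round(model(decode(z)) - model(decode(z_base)), 6))
```

Exact enumeration costs O(2^d) model calls per explanation, which is exactly why computing Shapley values in a small latent space (rather than over all D correlated inputs) can reduce algorithmic complexity; the paper's actual manifold learning and mapping methods differ from this simplified sketch.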

