Abstract
Explainable Artificial Intelligence (XAI) has emerged as a critical domain for demystifying the opaque decision-making processes of machine learning models, fostering trust and understanding among users. Among the various XAI methods, SHAP (SHapley Additive exPlanations) has gained prominence for its theoretically grounded approach and practical applicability. This paper presents a comprehensive exploration of SHAP's effectiveness compared with other prominent XAI methods. Methods such as LIME (Local Interpretable Model-agnostic Explanations), permutation importance, Anchors, and partial dependence plots are examined for their respective strengths and limitations. Through a detailed analysis of their principles, strengths, and limitations, drawing on a review of research papers and several important factors of XAI, the paper aims to provide insights into the effectiveness and suitability of these methods. The study offers valuable guidance for researchers and practitioners seeking to incorporate XAI into their AI systems.

Keywords: SHAP, XAI, LIME, permutation importance, Anchors, partial dependence plots