Abstract

Trust and credibility in machine learning models are bolstered by the ability of a model to explain its decisions. While explainability of deep learning models is a well-known challenge, a further challenge is the clarity of the explanation itself for the model's relevant stakeholders. Layer-wise Relevance Propagation (LRP), an established explainability technique developed for deep models in computer vision, provides intuitive, human-readable heat maps of input images. We present a novel application of LRP to tabular datasets containing mixed data (categorical and numerical) using a deep neural network (1D-CNN), for Credit Card Fraud detection and Telecom Customer Churn prediction use cases. We show that LRP provides more effective explanations than the established techniques of Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP), both locally at the level of individual samples and holistically over the whole testing set. We also discuss the significant computational time advantage of LRP (1–2 s) over LIME (22 s) and SHAP (108 s) on the same laptop, and thus its potential for real-time application scenarios. In addition, our validation of LRP highlighted features that enhance model performance, opening up a new research direction: using XAI as an approach to feature subset selection.
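As an illustration of how LRP redistributes a prediction back onto individual tabular features, the sketch below implements the epsilon-LRP rule for a small fully connected ReLU network. The network shape, random weights, and variable names are hypothetical assumptions for illustration only and are not taken from the paper's implementation.

```python
# Minimal sketch of the epsilon-LRP rule on a small fully connected ReLU
# network over tabular features; weights and shapes are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_hidden, n_classes = 8, 16, 2

# Hypothetical "trained" parameters (random here, purely for illustration).
W1, b1 = rng.normal(size=(n_features, n_hidden)), np.zeros(n_hidden)
W2, b2 = rng.normal(size=(n_hidden, n_classes)), np.zeros(n_classes)

x = rng.normal(size=n_features)            # one preprocessed tabular row

# Forward pass, keeping activations for the backward relevance pass.
a1 = np.maximum(0.0, x @ W1 + b1)          # hidden ReLU activations
z2 = a1 @ W2 + b2                          # class scores (logits)

def lrp_epsilon(a, W, R_out, eps=1e-6):
    """Redistribute relevance R_out from a layer's outputs to its inputs
    using the epsilon rule: R_j = sum_k a_j * w_jk / (z_k + eps*sign(z_k)) * R_k,
    where z_k = sum_j a_j * w_jk (bias omitted for simplicity)."""
    z = a @ W
    z = z + eps * np.where(z >= 0, 1.0, -1.0)   # stabilise small denominators
    s = R_out / z                               # normalised relevance per output
    return a * (W @ s)                          # relevance per input neuron

# Start from the logit of the predicted class and propagate back to the inputs.
R2 = np.zeros(n_classes)
R2[z2.argmax()] = z2.max()
R1 = lrp_epsilon(a1, W2, R2)
R0 = lrp_epsilon(x, W1, R1)                     # per-feature relevance scores

print("feature relevances:", np.round(R0, 3))
```

The resulting per-feature relevance vector plays the role that a pixel-level heat map plays for images: features with large positive relevance pushed the model toward the predicted class, which is the sample-level explanation compared against LIME and SHAP in the paper.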

Highlights

  • Explainable Artificial Intelligence (XAI) is about opening the “black box” decision making of Machine Learning (ML) algorithms so that decisions are transparent and understandable

  • We evaluated several models to achieve the best results, comparing a 1D-Convolutional Neural Network (CNN) against several ML classifiers (e.g., Logistic Regression, Random Forest)

  • We provide the first application of a 1D-CNN with Layer-wise Relevance Propagation (LRP) on structured data (see the sketch after this list)
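To make the 1D-CNN on structured data concrete, here is a minimal Keras sketch: the tabular feature vector is treated as a one-dimensional signal with a single channel so that Conv1D layers can be applied. The layer sizes, feature count, and placeholder data are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal sketch of a 1D-CNN classifier over tabular features, assuming the
# mixed categorical columns have already been one-hot encoded and the
# numerical columns scaled; all sizes below are illustrative.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_features = 30                                   # e.g. encoded Churn/Fraud columns

model = keras.Sequential([
    layers.Input(shape=(n_features, 1)),          # treat the feature vector as a 1D signal
    layers.Conv1D(32, kernel_size=3, activation="relu"),
    layers.Conv1D(16, kernel_size=3, activation="relu"),
    layers.Flatten(),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),        # binary output: fraud / churn
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Tabular rows are reshaped to (samples, features, 1) before training.
X = np.random.rand(256, n_features, 1)            # placeholder data
y = np.random.randint(0, 2, size=256)
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```

Once such a model is trained, the same forward activations can be fed to an LRP backward pass (as in the relevance sketch under the Abstract) or to LIME/SHAP explainers for comparison.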


Introduction

Explainable Artificial Intelligence (XAI) is about opening the “black box” decision making of Machine Learning (ML) algorithms so that decisions are transparent and understandable. This ability to explain decision models is important to data scientists, end users, company personnel, regulatory authorities, and any other stakeholder with a valid remit to ask questions about the decision making of such systems. XAI incorporates a suite of ML techniques that enables human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners [1]. Interest in XAI research has been growing along with the capabilities and applications of modern AI systems; this, in turn, can widen the adoption of AI solutions and deliver greater business value.
