Abstract

The rapid growth of research in explainable artificial intelligence (XAI) follows from two substantial developments. First, the enormous application success of modern machine learning methods, especially deep and reinforcement learning, has created high expectations for industrial, commercial, and social value. Second, there is an emerging and growing concern for creating ethical and trusted AI systems, including compliance with regulatory principles that ensure transparency and trust. These two threads have created a kind of “perfect storm” of research activity, all motivated to create and deliver tools and techniques that address the XAI demand. As some surveys of current XAI work suggest, there has yet to appear a principled framework that respects the literature on explainability in the history of science and that provides a basis for developing transparent XAI. We identify four foundational components: (1) explicit representation of explanation knowledge, (2) delivery of alternative explanations, (3) adjustment of explanations based on knowledge of the explainee, and (4) exploitation of interactive explanation. With those four components in mind, we provide a strategic inventory of XAI requirements, connect them to a brief history of XAI ideas, and then synthesize those ideas into a simple framework that can guide the design of AI systems that require XAI.

Highlights

  • Consider what each layer learns in a convolutional neural network (CNN) for image analysis: early layers are responsible for extracting low-level features such as edges and simple shapes, while later layers usually extract high-level features whose semantics are understood with respect to an application domain (a minimal sketch of inspecting these per-layer activations follows this list)

  • While we have more to say about evaluation below, what is clear is that evaluation of explanatory systems is based on how the explainee confirms their own understanding of an explanation or the conclusion of an explanatory dialogue

  • Nowhere is this more important than in the history of abductive reasoning and its connection to the history of scientific reasoning, which culminates in the construction and use of causal models as a basis for causal explanations
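
To make the first highlight concrete, the sketch below captures intermediate activations of a small CNN so that early-layer versus late-layer features can be inspected. It is a minimal illustration only: the model architecture, layer choices, and input are assumptions, not details from the paper.

```python
# Minimal sketch (assumed model and layer names): capture per-layer
# activations of a small CNN so early vs. late features can be inspected.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # early layer: edges, simple shapes
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # later layer: more abstract features
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 10),                            # task-specific class scores
)

activations = {}

def capture(name):
    # Forward hook that stores the layer's output under the given name.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Register hooks on the two convolutional layers.
model[0].register_forward_hook(capture("conv_early"))
model[2].register_forward_hook(capture("conv_late"))

x = torch.randn(1, 3, 32, 32)  # stand-in for a real image
_ = model(x)

for name, act in activations.items():
    print(name, tuple(act.shape))  # e.g., conv_early (1, 16, 32, 32)
```

The captured tensors can then be visualized (e.g., as feature maps) to compare what early and late layers respond to.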


Summary

Introduction

Many have noted the value of interactive XAI and dialogue systems [10], which provide a basis for an explainee to submit and receive responses to questions about a model prediction, including alternative explanations, and to build deeper trust in the system. Further, note that these four central concepts informally suggest the need to somehow identify the quality of explanations. This helps provide sufficient detail to articulate the relationship between current explanatory concepts and their historical roots, e.g., to consider the emerging demands on the properties of a formal definition of interpretability by assessing the classical formal-systems view of interpretability.
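
As a hedged illustration of such an explanatory dialogue, the sketch below pairs a prediction with a loop in which the explainee can request alternative explanations or confirm understanding. The `Explainer` class, its methods, and the sample data are hypothetical, introduced here only to make the interaction pattern concrete; they are not an API from the paper or from any library.

```python
# Hypothetical sketch of an interactive explanation loop: the explainee asks
# follow-up questions about a prediction and can request alternative
# explanations until satisfied. All names here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Explainer:
    prediction: str
    explanations: list[str] = field(default_factory=list)
    _cursor: int = 0

    def explain(self) -> str:
        """Return the current explanation for the prediction."""
        return self.explanations[self._cursor]

    def alternative(self) -> str:
        """Advance to the next alternative explanation, cycling if needed."""
        self._cursor = (self._cursor + 1) % len(self.explanations)
        return self.explanations[self._cursor]

def dialogue(explainer: Explainer, questions: list[str]) -> None:
    # A toy dialogue: present the prediction, then answer each query.
    print(f"Prediction: {explainer.prediction}")
    print(f"Explanation: {explainer.explain()}")
    for q in questions:
        if q == "why else?":    # explainee asks for an alternative explanation
            print(f"Alternative: {explainer.alternative()}")
        elif q == "ok":         # explainee confirms understanding
            print("Explainee accepts the explanation.")
            break

dialogue(
    Explainer(
        prediction="loan denied",
        explanations=[
            "income below threshold for the requested amount",
            "short credit history relative to similar applicants",
        ],
    ),
    ["why else?", "ok"],
)
```

The design point is that explanation quality is judged inside the loop, by whether the explainee eventually confirms understanding, rather than by a fixed one-shot output.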

Explainability and Interpretability
Alternative Explanations
Debugging Versus Explanation
Is There a Trade-off between Explanatory Models and Classification Accuracy?
Assessing the Quality of Explanations
Expert Systems and Abduction
Scientific Explanation
Causality
Classification of Research Trends Based on Levels of Explanation
Concurrently Constructed Explanations
Post-Hoc Explanations
Model-Dependent Explanations
Model-Independent Explanations
Classification Based on Levels of Explanation
Method
Level 0
Level 1
Level 2
Level 3
Level 4
XAI Architecture
User-Guided Explanation
Measuring the Value of Explanations
Summary and Conclusions