Abstract

Deep learning models have contributed to unprecedented results in the prediction and classification tasks of Artificial Intelligence (AI) systems. However, alongside this notable progress, they do not provide human-understandable insights on how a specific result was achieved. In contexts where the impact of AI on human life is relevant (e.g., recruitment tools, medical diagnoses, etc.), explainability is not only a desirable property; it is, or in some cases will soon be, a legal requirement. Most of the available approaches to implement eXplainable Artificial Intelligence (XAI) focus on technical solutions usable only by experts able to manipulate the recursive mathematical functions in deep learning algorithms. A complementary approach is represented by symbolic AI, where symbols are the elements of a lingua franca between humans and deep learning models. In this context, Knowledge Graphs (KGs) and their underlying semantic technologies are the modern implementation of symbolic AI: while less flexible and less robust to noise than deep learning models, KGs are natively developed to be explainable. In this paper, we review the main XAI approaches in the literature, highlighting their strengths and limitations, and we propose neural-symbolic integration as a cornerstone for designing AI that is closer to non-insiders' comprehension. Within this general direction, we identify three specific challenges for future research: knowledge matching, cross-disciplinary explanations, and interactive explanations.

Highlights

  • Deep Learning techniques are dominant in the modern approach to Artificial Intelligence (AI); their use is widespread due to their very high performance in prediction and classification tasks across application areas [1,2,3]

  • The leading implementation of modern AI, based on deep learning models, is barely intelligible to the layman, and the main technical solutions proposed in the field of explainable AI are usable only by experts in the field


Summary

Introduction

Deep Learning techniques are dominant in the modern approach to Artificial Intelligence (AI). In the context of symbolic systems, Knowledge Graphs (KGs) [9] and their underlying semantic technologies are a promising solution to the issue of understandability [10]. These large networks of semantic entities and relationships provide a useful backbone for several reasoning mechanisms, ranging from consistency checking [11] to causal inference [12]. Knowledge matching, i.e., aligning deep learning components (input features, hidden units and layers, and output predictions) with KG and ontology components, can make the internal functioning of algorithms more transparent and comprehensible; in addition, the query and reasoning mechanisms of KGs and ontologies enable the conditions for advanced explanations, namely cross-disciplinary and interactive explanations.
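As a rough illustration of the knowledge-matching idea, the sketch below links a model's input features and their importance scores to concepts in a toy knowledge graph and then narrates a prediction in domain terms. All names in it (features, concepts, relations, importance values) are hypothetical, and the label-based mapping plus one-hop graph traversal is a simplifying assumption, not the specific integration method discussed in the paper.

```python
# Minimal sketch of "knowledge matching": linking model components (here, input
# features and their importance scores) to concepts in a toy knowledge graph so
# that a prediction can be narrated in domain terms. All names below (features,
# concepts, relations, scores) are illustrative assumptions.

# Toy knowledge graph: concept -> list of (relation, related concept)
knowledge_graph = {
    "BloodPressure": [("isSymptomOf", "Hypertension")],
    "BodyMassIndex": [("isRiskFactorFor", "Hypertension")],
    "Hypertension":  [("isA", "CardiovascularCondition")],
}

# Hypothetical mapping from model input features to KG concepts (the matching
# step); in practice this could come from label or embedding similarity.
feature_to_concept = {
    "bp_systolic": "BloodPressure",
    "bmi": "BodyMassIndex",
}

def explain_prediction(feature_importances, predicted_class, top_k=2):
    """Turn per-feature importance scores into a concept-level explanation."""
    ranked = sorted(feature_importances.items(), key=lambda kv: -abs(kv[1]))[:top_k]
    lines = [f"Predicted class: {predicted_class}"]
    for feature, score in ranked:
        concept = feature_to_concept.get(feature)
        if concept is None:
            continue  # unmatched features stay opaque
        # Follow one hop in the KG to ground the feature in domain knowledge.
        for relation, target in knowledge_graph.get(concept, []):
            lines.append(
                f"- '{feature}' (importance {score:+.2f}) maps to concept "
                f"'{concept}', which {relation} '{target}'."
            )
    return "\n".join(lines)

# Example: importance scores as they might come from a feature-attribution method.
print(explain_prediction({"bp_systolic": 0.62, "bmi": 0.21}, "Hypertension"))
```

In a realistic setting, the feature-to-concept (or hidden-unit-to-concept) mapping would be learned or derived from an ontology rather than hand-written, and the explanation could be extended by deeper KG traversal or by answering follow-up queries, which is where cross-disciplinary and interactive explanations would come into play.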

Background
Explanations for AI Experts
Technical Issues in a Connectionist Perspective
Explainable Systems for AI Experts
Explanations for Non-Insiders
Conclusions and Future Work
