Abstract

The increasing quality and performance of artificial intelligence (AI) in general, and machine learning (ML) in particular, have led to a wider use of these approaches in everyday life. As part of this development, ML classifiers have also gained importance for diagnosing diseases within biomedical engineering and the medical sciences. However, many of these ubiquitous high-performing ML algorithms have a black-box nature, leading to opaque and incomprehensible systems that complicate human interpretation of single predictions or of the whole prediction process. This poses a serious challenge to human decision makers, who must develop trust, which is much needed in life-changing decision tasks. This paper is designed to answer the question of how expert companion systems for decision support can be designed to be interpretable and therefore transparent and comprehensible for humans. In addition, an approach to interactive ML and human-in-the-loop learning is demonstrated that integrates human expert knowledge into ML models, so that humans and machines act as companions within a critical decision task. We especially address the problem of Semantic Alignment between ML classifiers and their human users as a prerequisite for semantically relevant and useful explanations as well as interactions. Our roadmap paper presents and discusses an interdisciplinary yet integrated Comprehensible Artificial Intelligence (cAI) transition framework for the task of medical diagnosis. We explain and integrate relevant concepts and research areas to provide the reader with a hands-on cookbook for achieving the transition from opaque black-box models to interactive, transparent, comprehensible, and trustworthy systems. To make our approach tangible, we present suitable state-of-the-art methods for the medical domain and include a realization concept for our framework. The emphasis is on the concept of Mutual Explanations (ME), which we introduce as a dialog-based, incremental process that provides human ML users not only with trust, but also with stronger participation in the learning process.

Highlights

  • While modern machine learning (ML) approaches have improved tremendously in terms of quality and are even able to exceed human performance in many cases, they currently lack the ability to provide an explicit declarative knowledge representation and hide the underlying explanatory structure (Holzinger et al., 2017)

  • All of these points of criticism have led to a steadily increasing importance of the research areas Explainable Artificial Intelligence, Interpretable Machine Learning, and Interactive ML, which we summarize and refer to as Comprehensible Artificial Intelligence (cAI)

  • Since LIME is a representative of perturbation-based explanation systems and constitutes the state of the art within xAI for image as well as text classification, we propose an architecture built around LIME that overcomes some of the drawbacks mentioned above, especially for text classification (a minimal usage sketch follows this list)
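
To make the perturbation-based approach tangible, the following is a minimal sketch of applying LIME to a text classifier. The toy corpus, class names, and classifier choice are hypothetical placeholders and not from the paper; only the lime and scikit-learn APIs themselves are real (pip install lime scikit-learn).

```python
# Minimal sketch: explaining a single text prediction with LIME.
# The tiny toy corpus and class names below are hypothetical.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in for a medical text classifier (e.g., report triage).
texts = ["severe chest pain and shortness of breath",
         "routine follow-up, no complaints",
         "acute abdominal pain with fever",
         "annual check-up, patient feels well"]
labels = [1, 0, 1, 0]  # 1 = urgent, 0 = routine (hypothetical)

pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(texts, labels)

# LIME perturbs the input text (removing words), queries the black box
# on the perturbed samples, and fits a local linear surrogate model.
explainer = LimeTextExplainer(class_names=["routine", "urgent"])
explanation = explainer.explain_instance(
    "sudden chest pain radiating to the left arm",
    pipeline.predict_proba,   # classifier_fn: list[str] -> probabilities
    num_features=5)           # top word-level contributions to report

# Word weights of the local surrogate, i.e., the explanation itself.
print(explanation.as_list())
```

The word weights printed at the end are what a visual or verbal explanation component would present to the human user; the surrogate is only locally faithful, which is one of the LIME drawbacks the proposed architecture addresses.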


Summary

INTRODUCTION

Modern ML approaches have improved tremendously in terms of quality (prediction accuracy) and are even able to exceed human performance in many cases, yet they currently lack the ability to provide an explicit declarative knowledge representation and hide the underlying explanatory structure (Holzinger et al., 2017). All of these points of criticism have led to a steadily increasing importance of the research areas Explainable Artificial Intelligence (xAI), Interpretable Machine Learning (iML), and Interactive ML, which we summarize and refer to as cAI. These primarily aim at developing approaches that, in addition to a high prediction accuracy, fulfill concepts like interpretability, explainability, confidence (including stability and robustness), causality, interactivity, liability (including liability security in a legal sense), socio-technical and domain aspects, bias awareness, as well as uncertainty handling. Explanation and interpretation techniques need to be in accordance with the individual domain and with social as well as ethical requirements. Causality is another necessary concept (Pearl, 2009) and refers to making underlying mechanisms transparent beyond computing correlations (Holzinger et al., 2019), in order to derive the true reasons that lead to a particular outcome. Domain requirements as well as legal and ethical aspects contribute to an overall understanding of cAI.
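
To make the distinction between correlation and causation concrete, the following is a minimal simulation sketch (not from the paper) of a classic confounding scenario: two variables correlate strongly because both depend on a common cause, yet intervening on one, in the sense of Pearl's do-operator, has no effect on the other. All variable names and effect sizes are hypothetical.

```python
# Minimal sketch of confounding: correlation without causation.
# Variables and effect sizes are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Common cause: disease severity drives both a lab marker and mortality.
severity = rng.normal(size=n)
marker = 2.0 * severity + rng.normal(size=n)        # observed lab marker
mortality = 1.5 * severity + rng.normal(size=n)     # observed outcome

# Observationally, marker and mortality correlate strongly ...
print(np.corrcoef(marker, mortality)[0, 1])          # approx. 0.74

# ... but intervening on the marker (do(marker = m)) severs its
# dependence on severity, so its link to the outcome vanishes.
marker_do = rng.normal(size=n)                       # set by intervention
mortality_do = 1.5 * severity + rng.normal(size=n)   # unaffected by marker
print(np.corrcoef(marker_do, mortality_do)[0, 1])    # approx. 0.0
```

A diagnostic system that merely computed the observational correlation would wrongly suggest treating the marker; a causally transparent system would reveal that severity is the true reason behind the outcome.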

FUNDAMENTALS OF cAI TRANSITION APPLIED TO MEDICINE
Explanation Generation and Visual/Verbal Explanations
Interactive Machine Learning
CONCLUSION