Abstract

Exploiting mutual explanations for interactive learning is presented as part of an interdisciplinary research project on transparent machine learning for medical decision support. The focus of the project is to combine deep learning black-box approaches with interpretable machine learning for the classification of different types of medical images, uniting the predictive accuracy of deep learning with the transparency and comprehensibility of interpretable models. Specifically, we present an extension of the Inductive Logic Programming system Aleph that allows for interactive learning. Medical experts can ask for verbal explanations, correct classification decisions, and, in addition, correct the explanations themselves. Thereby, expert knowledge can be taken into account in the form of constraints for model adaptation.

Highlights

  • Medical decision making is one of the most relevant real-world domains where intelligent support is necessary to help human experts master the ever-growing complexity

  • We introduce Inductive Logic Programming (ILP) as a powerful approach to interpretable machine learning which naturally allows combining reasoning and learning

  • We present a framework for making use of mutual explanations for joint decision making in medicine


Summary

Introduction

Medical decision making is one of the most relevant real-world domains where intelligent support is necessary to help human experts master the ever-growing complexity. The user only has access to the input information (for instance, a medical image) and the resulting classifier decision as output; the reasoning underlying this decision remains intransparent. Another challenge when applying machine learning in medicine, and in many other real-world domains, is that the amount and quality of data often cannot meet the demands of highly data-intensive machine learning approaches. We present the research project Transparent Medical Expert Companion, in which we aim at developing an approach for such a balanced human-AI partnership by making machine-learning-based decisions in medicine transparent, comprehensible, and correctable. We show how mutual explanations can be realised by extending the ILP system Aleph [30]. This extension allows the medical expert to correct explanations in order to constrain model adaptation. The expert can inspect the affected scans and either change their labels or modify the rules again.
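
To make this concrete, the following is a minimal sketch of how such an expert correction could be encoded as a constraint for Aleph. The predicate vocabulary (scan, region, contains/2, is_a/2, touches/2) is an illustrative assumption, not the project's actual representation; only the Aleph directives (modeh/2, modeb/2, determination/2, prune/1) are standard.

    % Mode declarations: learn rules that classify a scan as positive,
    % built from spatial relations between image regions.
    :- modeh(1, positive(+scan)).
    :- modeb(*, contains(+scan, -region)).
    :- modeb(*, is_a(+region, #tissue_type)).
    :- modeb(*, touches(+region, +region)).

    :- determination(positive/1, contains/2).
    :- determination(positive/1, is_a/2).
    :- determination(positive/1, touches/2).

    % Expert correction expressed as a pruning constraint: reject any
    % hypothesis whose explanation uses the predicate the expert marked
    % as irrelevant (here, hypothetically, touches/2).
    prune((_Head :- Body)) :-
        in_body(touches(_, _), Body).

    % Helper: succeed if Literal occurs in the conjunction Body.
    in_body(Literal, (Literal, _)).
    in_body(Literal, (_, Rest)) :- in_body(Literal, Rest).
    in_body(Literal, Literal).

With such a prune/1 definition loaded alongside the background knowledge, re-running induction yields only rules consistent with the correction; relabelled examples would simply be moved between Aleph's positive and negative example files before the next induction step.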

Image Based Medical Data with Spatial Relations
Exploiting Mutual Explanations for Learning
Conclusions
Compliance with ethical standards