Abstract

Purpose
Expertise for auditing AI systems in the medical domain is only now being accumulated. Conformity assessment procedures will require AI systems: (1) to be transparent, (2) not to base decisions solely on algorithms, and (3) to include safety assurance cases in the documentation to facilitate technical audit. We are interested here in obtaining transparency in the case of machine learning (ML) applied to the classification of retina conditions. Achieving high performance metrics with ML has become common practice. However, in the medical domain, algorithmic decisions need to be sustained by explanations. We aim to build a support tool for ophthalmologists able to: (i) explain the algorithmic decision to the human agent by automatically extracting rules from the ML learned models; (ii) include the ophthalmologist in the loop by formalising expert rules and including the expert knowledge in the argumentation machinery; (iii) build safety cases by creating assurance argument patterns for each diagnosis.

Methods
For the learning task, we used a dataset consisting of 699 OCT images: 126 of the Normal class, 210 with Diabetic Retinopathy (DR), and 363 with Age-Related Macular Degeneration (AMD). The dataset contains patients from the Ophthalmology Department of the County Emergency Hospital of Cluj-Napoca. All ethical norms and procedures, including anonymisation, were followed. We applied three machine learning algorithms: decision tree (DT), support vector machine (SVM), and artificial neural network (ANN). For each algorithm we automatically extract diagnosis rules. For formalising expert knowledge, we relied on the normative dataset of Invernizzi et al. (Ophthalmol Retina 2(8):808–815, 2018). For arguing between agents, we used the Jason multi-agent platform. Each agent has its own knowledge base and reasoning capabilities, as well as its own optical coherence tomography (OCT) images, on which it applies a distinct machine learning algorithm. The learned model is used to extract diagnosis rules. With distinct learned rules, the agents engage in an argumentative process. The resolution of the debate outputs a diagnosis, which is then explained to the ophthalmologist by means of assurance cases.

Results
For diagnosing the retina condition, our AI solution addresses the following three issues. First, the learned models are automatically translated into rules. These rules are then used to build an explanation by tracing the reasoning chain supporting the diagnosis. Hence, the proposed AI solution complies with the requirement that "algorithmic decisions should be explained to the human agent". Second, the decision is not based solely on ML algorithms: the proposed architecture includes expert knowledge, and the diagnosis is reached by exchanging arguments between ML-based algorithms and expert knowledge. The conflict resolution among arguments is verbalised, so that the ophthalmologist can supervise the diagnosis. Third, assurance cases are generated to facilitate technical audit. The assurance cases structure the evidence along safety goals such as machine learning methodology, transparency, or data quality. For each dimension, the auditor can check the provided evidence against current best practices or safety standards.

Conclusion
We developed a multi-agent system for diagnosing retina conditions in which algorithmic decisions are sustained by explanations. The proposed tool goes beyond most medical software, which focuses only on performance metrics.
Our approach helps the technical auditor approve software in the medical domain. Interleaving knowledge extracted from ML models with expert knowledge is a step towards balancing the benefits of ML with explainability, aiming at engineering reliable medical applications.
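The Methods above state that diagnosis rules are extracted automatically from each learned model. As a hedged illustration of that step for the decision tree agent, the Python sketch below trains a tree on a table of OCT-derived parameters and verbalises its decision paths; the file name, column names, and hyper-parameters are assumptions for illustration, not the authors' actual pipeline.

```python
# Minimal sketch (assumed setup, not the paper's implementation): train a
# decision tree on OCT-derived features and print its decision paths as
# human-readable IF-THEN style rules.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical table: one row per patient, 18 OCT parameters plus a label column.
df = pd.read_csv("oct_features.csv")
X = df.drop(columns=["diagnosis"])           # the 18 numeric OCT parameters
y = df["diagnosis"]                          # "Normal", "DR" or "AMD"

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)

# Each root-to-leaf path printed here corresponds to one candidate diagnosis
# rule that an agent could put forward during argumentation and that can be
# traced when building the explanation.
print(export_text(tree, feature_names=list(X.columns)))
```

A similar extraction step would be needed for the SVM and ANN agents, typically by approximating the learned model with a rule-based surrogate.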

Highlights

  • Patients expect physicians to comprehensibly explain decisions that have an impact on them

  • The high performance obtained through deep learning models, which are black-box models, raises at least three practical challenges: 1) How can explanations be provided to the users? 2) How can assurance cases be built for audit and safety approval? 3) How can expert knowledge be included in the loop? These challenges are not distinct but rather interleaved: if one extracts knowledge from the learned models, the reasoning can be traced to generate explanations and to build safety cases (a sketch of encoding such expert knowledge follows this list)

  • The system diagnoses and explains three retina conditions (AMD, Diabetic Retinopathy (DR), and Normal) using 18 parameters extracted from Optical Coherence Tomography (OCT) images
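As a hedged illustration of the third challenge, the sketch below shows how an expert rule checked against a normative interval might be encoded. The parameter name and the numeric thresholds are placeholders, not the normative values of Invernizzi et al.

```python
# Illustrative only: the parameter name and thresholds below are assumptions,
# NOT the normative values reported by Invernizzi et al.
from dataclasses import dataclass

@dataclass
class OctCase:
    central_macular_thickness_um: float   # assumed name for one of the 18 OCT parameters

def expert_rule(case: OctCase) -> str:
    """Compare one measurement against an illustrative normative interval."""
    if case.central_macular_thickness_um > 300:    # placeholder upper bound
        return "above normative range: argues against the Normal class"
    if case.central_macular_thickness_um < 200:    # placeholder lower bound
        return "below normative range: argues against the Normal class"
    return "within normative range: argues for the Normal class"

print(expert_rule(OctCase(central_macular_thickness_um=315.0)))
```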


Summary

Introduction

Patients expect physicians to comprehensibly explain decisions that have an impact on them. With the current surge of interest in Machine Learning (ML)-based medical software, explaining decisions made on the basis of black-box ML models remains challenging [2]. These challenges are not distinct but rather interleaved: if one extracts knowledge from the learned models, the reasoning can be traced to generate explanations and to build safety cases. Our aims are: (i) to explain the algorithmic decision to the human agent by automatically extracting rules from the learned models; (ii) to include the ophthalmologist in the loop by formalising expert rules and including the expert knowledge in the argumentation machinery; and (iii) to build safety cases by creating assurance argument patterns for each diagnosis. This design goes beyond most medical software, which focuses only on performance metrics. We propose an argumentation-based decision function that receives results from three ML classifiers and one expert agent. The expert agent contains rules manually formalised from normative data for the retina, such as the data reported by Invernizzi et al. [13]. Both DR and AMD modify the volume and the thickness of the macular retina; if detected early, severe vision loss can be prevented.
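To make the argumentation-based decision function concrete, here is a hedged Python sketch of how the verdicts of the three ML agents and the expert agent could be combined and verbalised. The real system runs this as an argumentation dialogue on the Jason multi-agent platform; the aggregation policy below (majority vote among ML agents, with the expert agent breaking ties) and all rule texts are assumptions used only to show the shape of the decision function.

```python
# Assumed aggregation policy, for illustration only; the paper implements the
# argumentation dialogue in Jason, not in Python.
from collections import Counter
from typing import NamedTuple

class Argument(NamedTuple):
    agent: str        # which agent puts the claim forward (DT, SVM, ANN, expert)
    diagnosis: str    # "Normal", "DR" or "AMD"
    reason: str       # the rule supporting the claim, reused in the explanation

def resolve(arguments: list[Argument]) -> tuple[str, str]:
    """Return the winning diagnosis and a verbalised justification."""
    votes = Counter(a.diagnosis for a in arguments if a.agent != "expert")
    top, count = votes.most_common(1)[0]
    tied = list(votes.values()).count(count) > 1
    expert = next((a for a in arguments if a.agent == "expert"), None)
    if tied and expert is not None:
        top = expert.diagnosis                     # expert breaks ties between ML agents
    support = "; ".join(f"{a.agent}: {a.reason}" for a in arguments if a.diagnosis == top)
    return top, f"Diagnosis {top} supported by -> {support}"

# Illustrative run with hypothetical rule texts.
args = [
    Argument("DT",     "AMD", "drusen-like thickening in the outer ring"),
    Argument("SVM",    "AMD", "feature vector on the AMD side of the margin"),
    Argument("ANN",    "DR",  "high activation on edema-related features"),
    Argument("expert", "AMD", "central thickness outside the normative range"),
]
print(resolve(args))
```

The verbalised justification returned by the decision function is the kind of traceable output that the explanation and assurance-case components described in the abstract can build on.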

Diagnosing by machine learning and expert knowledge
Rules extracted from machine learning models
Rules formalised from expert knowledge
Argumentative patterns
Conflict resolution
Explaining diagnosis
Introducing the ophthalmologist in the loop
Discussions and Related Work
Findings
Conclusion

