Abstract

Explainable Machine Learning comprises methods and techniques that enable users to better understand the functioning and results of machine learning models. This work proposes an ontology that represents explainable machine learning experiments, allowing data scientists and developers to gain a holistic view and a better understanding of the explainable machine learning process, and to build trust. We developed the ontology by reusing an existing domain-specific ontology (ML-SCHEMA) and grounding it in the Unified Foundational Ontology (UFO), aiming to achieve interoperability. The proposed ontology is structured in three modules: (1) the general module, (2) the specific module, and (3) the explanation module. The ontology was evaluated through a case study in the scenario of the COVID-19 pandemic, using sensitive healthcare data from patients. In the case study, we trained a Support Vector Machine to predict the mortality of patients infected with COVID-19 and applied existing explanation methods to generate explanations from the trained model. Based on the case study, we populated the ontology and queried it to ensure that it fulfills its intended purpose and to demonstrate its suitability.
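
To make the case study concrete, the sketch below illustrates the kind of pipeline the abstract describes: training an SVM classifier on patient features and applying a model-agnostic explanation method to the trained model. The tooling (scikit-learn), the synthetic feature names, and the choice of permutation importance are assumptions made here for illustration; the abstract only states that "existing explanation methods" were applied and does not name them.

```python
# Minimal sketch of the case-study pipeline, NOT the authors' actual code.
# Assumptions: scikit-learn; synthetic placeholder data instead of the sensitive
# patient dataset; permutation importance as a stand-in explanation method.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(42)

# Hypothetical clinical features standing in for the real (sensitive) patient data.
feature_names = ["age", "oxygen_saturation", "comorbidity_count", "crp_level"]
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 2] - X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Black-box model of the case study: an SVM mortality classifier.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", random_state=0))
svm.fit(X_train, y_train)

# Post-hoc, model-agnostic explanation of the trained model.
result = permutation_importance(svm, X_test, y_test, n_repeats=20, random_state=0)
for name, importance in sorted(zip(feature_names, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {importance:.3f}")
```

In a run like this, the dataset, the trained model, and the explanation output are the kinds of entities that would be represented as instances when populating the proposed ontology.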

Highlights

  • Artificial Intelligence (AI) and Machine Learning (ML) have been extensively explored due to their ability to learn and perform autonomous tasks, and the potential to achieve better results than humans [1,2]

  • The opaqueness of inscrutable ML models can be remedied by extracting rules that mimic the black box as closely as possible, since some insight is gained into the logical workings of the ML model by obtaining a set of rules that mimic the model’s predictions [24] (a hedged sketch of this idea follows the list)

  • By performing different queries for each competency question (CQ), we demonstrated that the ontology is able to answer all CQs, which is the first step in validating the ontology
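
The rule-extraction idea in the second highlight can be sketched as a global surrogate: a simple, rule-producing learner is trained to reproduce the black box's predictions rather than the true labels. The sketch below is an illustration under assumed tooling (scikit-learn, a shallow decision tree as the surrogate, synthetic data); reference [24] may describe a different extraction technique.

```python
# Minimal sketch of surrogate rule extraction, NOT code from the paper.
# Assumptions: scikit-learn, synthetic data, a shallow decision tree as the rule extractor.
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
feature_names = ["f1", "f2", "f3"]
X = rng.normal(size=(400, 3))
y = ((X[:, 0] > 0) & (X[:, 1] < 0.5)).astype(int)

# 1. Train the black-box model.
black_box = SVC(kernel="rbf").fit(X, y)

# 2. Fit an intelligible surrogate on the black box's *predictions*, not the true labels,
#    so the extracted rules approximate the model's decision logic.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# 3. Read the surrogate as a set of if-then rules that mimic the black box,
#    and measure fidelity (how closely the rules reproduce the SVM's predictions).
print(export_text(surrogate, feature_names=feature_names))
print("fidelity:", surrogate.score(X, black_box.predict(X)))
```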

Introduction

Artificial Intelligence (AI) and Machine Learning (ML) have been extensively explored due to their ability to learn and perform autonomous tasks, and the potential to achieve better results than humans [1,2]. Among ML models, there are inherently intelligible algorithms, as opposed to inscrutable ones. Models are inherently intelligible to the degree that a human can predict how a change to a feature in the input can affect the output [2]. Inscrutable models are more complex and harder to explain; it is more challenging to understand the reasons for their results. Examples are complex neural networks and deep learning models. For this reason, these models are often considered black boxes [3].
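
The contrast can be illustrated with a small, hypothetical example (not from the paper): a linear model's coefficients let a human predict how changing a feature shifts the output, whereas a multi-layer network offers no comparably direct reading. scikit-learn and the synthetic data below are assumptions made purely for illustration.

```python
# Illustrative contrast between an intelligible and an inscrutable model, NOT from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))
y = (2 * X[:, 0] - X[:, 1] > 0).astype(int)

# Intelligible: each coefficient directly tells a human how a feature change affects the output.
intelligible = LogisticRegression().fit(X, y)
print("coefficients:", intelligible.coef_)

# Inscrutable: the learned behavior is spread over many entangled weights with no per-feature reading.
inscrutable = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=1).fit(X, y)
print("weight matrix shapes:", [w.shape for w in inscrutable.coefs_])
```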
