Abstract

With the evolution of artificial intelligence, Machine Learning (ML) techniques have become increasingly powerful predictors, and their use has become part of daily life in application scenarios such as disease diagnosis, movie recommendation, system monitoring, and detection of malicious attacks. Although ML provides highly accurate predictions, it suffers from opacity: by behaving like a black box, it keeps users from understanding how particular decisions are reached. Interpretable Machine Learning (IML) is a recent approach that offers a promising solution to the opaqueness of complex ML techniques. It provides transparency about how the inner workings of ML lead to certain decisions and allows users to follow the decision-making process. In critical scenarios such as healthcare, it can be extremely important to know the reasons behind a decision as well as the result itself. In this study, we aim to show the benefits of IML through a healthcare case study. In our experiments, we employ the SHAP and LIME IML models with the Random Forest (RF) and Gradient Boosting (GB) algorithms for the problem of diagnosing diabetes and explaining those diagnoses. Overall, the results show that applying IML models to complex, hard-to-interpret ML techniques provides detailed interpretability while maintaining accuracy. We also perform experiments on local interpretability by focusing on a single instance, which is another advantage of IML.
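As a rough illustration of the workflow described above, the following minimal sketch trains a Random Forest classifier and explains it globally with SHAP and locally with LIME. It is not the authors' exact pipeline: the dataset is a synthetic stand-in for a diabetes dataset (such as Pima Indians), and the feature names are illustrative assumptions only.

```python
# Minimal sketch (not the authors' exact pipeline): train a Random Forest,
# explain it globally with SHAP, and explain one instance locally with LIME.
# Synthetic data stands in for a diabetes dataset; feature names are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
import shap                                           # pip install shap
from lime.lime_tabular import LimeTabularExplainer    # pip install lime

feature_names = ["glucose", "bmi", "age", "blood_pressure",
                 "insulin", "pregnancies", "skin_thickness", "pedigree"]
X, y = make_classification(n_samples=500, n_features=8, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Global interpretability: SHAP feature attributions over the test set.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test, feature_names=feature_names)

# Local interpretability: LIME explanation for a single patient (instance 0).
lime_explainer = LimeTabularExplainer(X_train, feature_names=feature_names,
                                      class_names=["no diabetes", "diabetes"],
                                      discretize_continuous=True)
exp = lime_explainer.explain_instance(X_test[0], model.predict_proba,
                                      num_features=5)
print(exp.as_list())
```

The same pattern applies to the Gradient Boosting model by swapping in a gradient boosting classifier; SHAP's tree explainer and LIME's tabular explainer both work with any tree-based model that exposes a probability prediction function.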
