Abstract

Explainable AI (XAI), as the name implies, is a form of artificial intelligence that enables the explanation of learning models and focuses on why a system arrived at a particular decision, exposing its logical paradigms, in contrast to the inherent black-box nature of many artificial intelligence systems. Similarly, interpretable machine learning allows users to comprehend the results of learning models by providing reasoning for the decisions they arrive at. This nature of XAI and Interpretable Machine Learning (IML) is particularly helpful in AI applications pertaining to healthcare and medical diagnosis. In this paper, we present a case study in which we use the ELI5 XAI toolkit in conjunction with the LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations) algorithmic frameworks in Python to determine whether a patient is diabetic, based on a randomized clinical trial dataset. We also endeavor to point out trends and the most vital factors that can help clinicians and researchers analyze patient data in conjunction with machine learning and artificial intelligence outputs. Having explanations for machine learning models allows for a higher degree of interpretability and paves the way for accountability and transparency in medical and other fields of data analysis. We explore the aforementioned paradigms in the context of this research paper, paving the way toward an accountable, transparent, and robust data analytics framework using XAI and IML.
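To make the SHAP idea mentioned above concrete, the sketch below computes exact Shapley values for a single prediction of a toy "diabetes risk" model. This is an illustrative sketch only, not the paper's implementation: the model, feature names, coefficients, and patient values are all hypothetical, and real SHAP/LIME usage would go through the `shap` and `lime` Python libraries against a trained classifier.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values for one prediction.

    predict  : model function taking a feature vector (list)
    x        : the instance being explained
    baseline : reference values used when a feature is "absent"
    """
    n = len(x)
    phi = [0.0] * n
    features = list(range(n))
    for i in features:
        others = [j for j in features if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Marginal contribution of feature i given coalition S:
                # replace features outside S (and i) with baseline values.
                with_i = [x[j] if (j in S or j == i) else baseline[j]
                          for j in features]
                without_i = [x[j] if j in S else baseline[j]
                             for j in features]
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += weight * (predict(with_i) - predict(without_i))
    return phi

# Toy linear risk score over glucose, BMI, and age.
# Coefficients are illustrative, not clinical, and not from the paper.
def risk(v):
    glucose, bmi, age = v
    return 0.02 * glucose + 0.05 * bmi + 0.01 * age

patient = [150.0, 32.0, 55.0]    # hypothetical patient
reference = [100.0, 25.0, 40.0]  # hypothetical population baseline

phi = shapley_values(risk, patient, reference)

# Efficiency property: attributions sum to the gap between the
# patient's prediction and the baseline prediction.
assert abs(sum(phi) - (risk(patient) - risk(reference))) < 1e-9
```

For a linear model the exact Shapley value of each feature reduces to its coefficient times its deviation from the baseline, which is why this brute-force enumeration is only practical for small feature counts; the `shap` library's TreeExplainer and KernelExplainer approximate the same quantity efficiently for real models.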
