Abstract

Autonomous disease prediction systems are becoming the norm in the healthcare industry. These systems provide decision support for medical practitioners and generate predictions from the health details users supply. They are built on Machine Learning models, but as data sizes grow they are unable to explain the rationale behind their predictions, which undermines user trust and the transparency of their decision-making. Explainable AI (XAI) can help users understand and interpret such autonomous predictions, restoring user trust and making the decision-making process of these systems transparent. Adding an XAI layer on top of the Machine Learning models in an autonomous system can also serve as a decision support tool for medical practitioners during diagnosis. In this paper, we analyze the two most popular model explainers, Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), for their applicability to autonomous disease prediction.
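To illustrate the kind of attribution SHAP produces, the sketch below computes exact Shapley values for a toy "disease risk" model. The model, its weights, the feature names (glucose, bmi, age), and the patient/baseline values are all hypothetical examples, not taken from the paper; a linear score is used so the result can be verified by hand (for a linear model, each feature's Shapley value equals its weight times its deviation from the baseline).

```python
import itertools
import math

# Hypothetical linear "disease risk" model over three features.
# The feature names and weights are illustrative only.
WEIGHTS = {"glucose": 0.5, "bmi": 0.3, "age": 0.2}

def model(x):
    """Predicted risk score for a feature dict x."""
    return sum(WEIGHTS[f] * x[f] for f in WEIGHTS)

def shapley_values(x, baseline):
    """Exact Shapley values: each feature's contribution to
    model(x) - model(baseline), averaged over all coalitions."""
    features = list(x)
    n = len(features)

    def value(coalition):
        # Coalition members take the patient's value; the rest stay at baseline.
        z = {g: (x[g] if g in coalition else baseline[g]) for g in features}
        return model(z)

    phi = {f: 0.0 for f in features}
    for f in features:
        others = [g for g in features if g != f]
        for k in range(n):
            for subset in itertools.combinations(others, k):
                # Shapley weight |S|! (n - |S| - 1)! / n!
                w = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
                s = set(subset)
                phi[f] += w * (value(s | {f}) - value(s))
    return phi

patient = {"glucose": 180.0, "bmi": 32.0, "age": 55.0}
baseline = {"glucose": 100.0, "bmi": 25.0, "age": 40.0}
phi = shapley_values(patient, baseline)
print(phi)  # glucose contributes 0.5*80 = 40.0, bmi 0.3*7 = 2.1, age 0.2*15 = 3.0
```

The exhaustive enumeration above is exponential in the number of features; the SHAP library approximates these values efficiently for real models, but the attribution it reports has exactly this interpretation.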
