Abstract
This study introduces, to our knowledge, the first self-explanatory interface for diagnosing diabetes using machine learning. We trained four classification models (Decision Tree (DT), K-Nearest Neighbour (KNN), Support Vector Classification (SVC), and Extreme Gradient Boosting (XGB)) on a publicly available diabetes dataset. To elucidate the inner workings of these models, we employed the machine learning interpretation method Shapley Additive Explanations (SHAP). All four models achieved good accuracy in diagnosing diabetes, with the XGB model showing a slight edge over the others. Utilising SHAP, we examined the XGB model in depth, explaining the reasoning behind its predictions at the level of individual patients. We then integrated the XGB model and SHAP's local explanations into an interface that predicts diabetes and presents a transparent explanation for each decision, giving users a clearer picture of their current health condition. Given the high-stakes nature of the medical field, the interface can be further enhanced with more extensive clinical data, ultimately aiding medical professionals in their decision-making.
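The model-comparison step described above can be sketched as follows. This is a minimal illustration, not the authors' code: the dataset here is synthetic stand-in data from `make_classification` (the paper uses a real diabetes dataset), and scikit-learn's `GradientBoostingClassifier` stands in for XGBoost to keep the sketch self-contained.

```python
# Hedged sketch of the four-model comparison; synthetic data and a
# gradient-boosting stand-in replace the paper's dataset and XGBoost.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import GradientBoostingClassifier  # stand-in for XGB
from sklearn.metrics import accuracy_score

# Synthetic binary-classification data shaped like a small tabular
# medical dataset (768 rows, 8 features, as in the Pima-style setting).
X, y = make_classification(n_samples=768, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=0
)

models = {
    "DT": DecisionTreeClassifier(random_state=0),
    "KNN": KNeighborsClassifier(),
    "SVC": SVC(random_state=0),
    "XGB (stand-in)": GradientBoostingClassifier(random_state=0),
}

accuracies = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    accuracies[name] = accuracy_score(y_te, model.predict(X_te))
    print(f"{name}: {accuracies[name]:.3f}")

# For the SHAP step, the paper would pass the fitted boosted model to
# shap.TreeExplainer to obtain per-patient (local) explanations.
```

In practice the best model by held-out accuracy (XGB in the paper) would then be wrapped, together with its SHAP local explanations, in the prediction interface.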