Abstract

Artificial Intelligence (AI) has become increasingly integral to healthcare, aiding medical professionals in disease diagnosis. However, a critical challenge for practitioners is the trustworthiness of diagnostic results, as machine learning models often lack transparent explanations for their predictions. Explainable AI (XAI) addresses this gap, allowing practitioners and patients to understand and trust model outcomes. This study uses a liver disease patient dataset to predict liver disease, a task of particular importance in India, where roughly one million (10 lakh) new cases of liver cirrhosis are diagnosed each year with high mortality. The difficulty of early detection underscores the need for transparent predictions. Using data from Kaggle, Random Forest, XGBoost, and Explainable Boosting Machine (EBM) classifiers are compared, with the EBM performing best at 99.8% accuracy. The study concludes that the EBM classifier is a superior and transparent liver disease predictor.
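The comparison described above can be reproduced in outline with standard libraries. The following is a minimal sketch, not the authors' exact pipeline: the file name, column names, and preprocessing steps are assumptions based on the commonly distributed Kaggle Indian Liver Patient Dataset, where the "Dataset" column labels patients with liver disease as 1.

```python
# Minimal sketch (assumed column names and file path) comparing Random Forest,
# XGBoost, and an Explainable Boosting Machine on a liver disease dataset.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier
from interpret.glassbox import ExplainableBoostingClassifier

df = pd.read_csv("indian_liver_patient.csv")           # assumed file name
df["Gender"] = (df["Gender"] == "Male").astype(int)    # encode categorical feature
df = df.dropna()                                       # drop rows with missing values
X = df.drop(columns=["Dataset"])                       # "Dataset" is the target label
y = (df["Dataset"] == 1).astype(int)                   # 1 = liver disease

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

models = {
    "Random Forest": RandomForestClassifier(random_state=42),
    "XGBoost": XGBClassifier(eval_metric="logloss", random_state=42),
    "EBM": ExplainableBoostingClassifier(random_state=42),
}

# Fit each classifier and report held-out accuracy for comparison.
for name, model in models.items():
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: accuracy = {acc:.3f}")
```

Unlike the tree ensembles, the EBM is a glassbox model: its per-feature shape functions can be inspected directly (e.g. via `model.explain_global()` in the `interpret` package), which is what motivates its use for transparent prediction here.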
