Abstract

Chronic liver disease (CLD) is a major health concern affecting millions of people worldwide. Early identification and prediction are critical for timely intervention at the earliest stages of the disease. Applying machine learning methods to CLD prediction can greatly improve medical outcomes, reduce the burden of the condition, and promote proactive, preventive healthcare for those at risk. However, traditional machine learning has limitations that can be mitigated through ensemble learning, and boosting is one of the most effective ensemble learning approaches. This study aims to improve the performance of available boosting techniques for CLD prediction. Seven popular boosting algorithms, namely Gradient Boosting (GB), AdaBoost, LogitBoost, SGBoost, XGBoost, LightGBM, and CatBoost, and two publicly available CLD datasets of dissimilar size and demography, the Liver Disease Patient Dataset (LDPD) and the Indian Liver Patient Dataset (ILPD), are considered. The features of the datasets are examined through exploratory data analysis. Additionally, hyperparameter tuning, normalisation, and upsampling are employed in the predictive analysis. The relative importance of each feature contributing to CLD is assessed for every algorithm. Each algorithm's performance on both datasets is evaluated using k-fold cross-validation, twelve metrics, and runtime. Among the seven boosting algorithms, GB emerged as the best overall performer on both datasets, attaining accuracy rates of 98.80% for LDPD and 98.29% for ILPD. GB also outperformed the other boosting algorithms on all remaining performance metrics except runtime.
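The following is a minimal sketch, not the authors' code, of the kind of evaluation pipeline the abstract describes: normalisation, upsampling of the minority class, a Gradient Boosting classifier, and k-fold cross-validation. The dataset file name, the label column, and the choice of k = 10, SMOTE for upsampling, and min-max scaling for normalisation are all assumptions for illustration.

```python
# Sketch of the abstract's pipeline: scaling + upsampling + Gradient Boosting,
# evaluated with stratified k-fold cross-validation. Requires scikit-learn,
# imbalanced-learn, and pandas.
import pandas as pd
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.preprocessing import MinMaxScaler

# Hypothetical ILPD-style CSV: numeric clinical features plus a binary label.
df = pd.read_csv("ilpd.csv")
X = df.drop(columns=["Liver_Disease"])   # assumed label column name
y = df["Liver_Disease"]

# Keeping scaling and SMOTE inside the pipeline ensures they are fitted only
# on the training portion of each fold, avoiding data leakage.
pipeline = Pipeline(steps=[
    ("scale", MinMaxScaler()),                  # normalisation
    ("upsample", SMOTE(random_state=42)),       # upsample the minority class
    ("clf", GradientBoostingClassifier(random_state=42)),
])

# k-fold cross-validation (k = 10 assumed), reporting mean accuracy.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
scores = cross_val_score(pipeline, X, y, cv=cv, scoring="accuracy")
print(f"Mean accuracy: {scores.mean():.4f} (+/- {scores.std():.4f})")
```

The same loop can be repeated with the other boosting classifiers (e.g. XGBoost's `XGBClassifier` or LightGBM's `LGBMClassifier`) and additional scoring metrics to reproduce the style of comparison reported in the study.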
