Abstract

Background

Previous studies have indicated that the model for end-stage liver disease (MELD) score may fail to predict post-transplantation patient survival. Similarly, other scores developed to predict transplant outcomes, such as the donor MELD score and the balance of risk (BAR) score, have not gained widespread use. These scores are typically derived from linear statistical models. This study aimed to compare the performance of traditional statistical models with that of machine learning approaches for predicting survival following liver transplantation.

Materials and methods

Data were obtained from 785 deceased-donor liver transplant recipients enrolled in the Korean Organ Transplant Registry (2014–2019). Five machine learning methods (random forest, artificial neural network, decision tree, naïve Bayes, and support vector machine) and four traditional statistical models (Cox regression, MELD score, donor MELD score, and BAR score) were compared for predicting survival.

Results

Among the machine learning methods, the random forest (RF) yielded the highest area under the receiver operating characteristic curve (AUC-ROC) values for predicting survival (1-month = 0.80; 3-month = 0.85; 12-month = 0.81). The AUC-ROC values of the Cox regression analysis were 0.75, 0.86, and 0.77 for 1-month, 3-month, and 12-month post-transplant survival, respectively, whereas the AUC-ROC values of the MELD, donor MELD, and BAR scores were all below 0.70. Based on the variable importance from the RF analysis, the major predictors associated with survival were cold ischemia time, donor ICU stay, recipient weight, recipient BMI, recipient age, recipient INR, and recipient albumin level. Consistent with the Cox regression analysis, donor ICU stay, donor bilirubin level, BAR score, and recipient albumin level were also important factors associated with post-transplant survival in the RF model; the coefficients of these variables were statistically significant in the Cox model (p < 0.05). The SHAP value ranges of selected predictors for 12-month survival were (−0.02, 0.10) for recipient albumin, (−0.05, 0.07) for donor bilirubin, and (−0.02, 0.25) for recipient height. Notably, recipient weight, recipient BMI, recipient age, and recipient INR were important factors in the RF model for predicting post-transplantation survival even though they were not statistically significant in the Cox model.

Conclusion

Machine learning algorithms such as the random forest were superior to conventional Cox regression and previously reported survival scores for predicting 1-month, 3-month, and 12-month survival following liver transplantation. Therefore, artificial intelligence may have significant potential in aiding clinical decision-making during liver transplantation, including matching donors and recipients.
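The model comparison described above can be sketched with scikit-learn. This is a minimal illustration only: the data below are synthetic, the feature names merely echo predictors mentioned in the abstract, and a logistic regression stands in for the study's Cox time-to-event model since the abstract evaluates binary survival at fixed horizons via AUC-ROC.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the registry data: 785 recipients, with a few of the
# predictors highlighted in the abstract (names and distributions are illustrative).
rng = np.random.default_rng(0)
n = 785
feature_names = ["cold_ischemia_time", "donor_icu_stay", "recipient_weight",
                 "recipient_inr", "recipient_albumin"]
X = np.column_stack([
    rng.normal(400, 120, n),   # cold ischemia time (min)
    rng.normal(4, 2, n),       # donor ICU stay (days)
    rng.normal(65, 12, n),     # recipient weight (kg)
    rng.normal(1.5, 0.5, n),   # recipient INR
    rng.normal(3.2, 0.6, n),   # recipient albumin (g/dL)
])
# Synthetic 12-month mortality label, weakly driven by two of the predictors.
logits = 0.05 * (X[:, 1] - 4) - 0.5 * (X[:, 4] - 3.2)
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# Random forest, evaluated by AUC-ROC as in the study.
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
auc_rf = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])

# Linear baseline (logistic regression here, not the study's Cox model).
lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc_lr = roc_auc_score(y_te, lr.predict_proba(X_te)[:, 1])

print(f"RF AUC-ROC: {auc_rf:.2f}  linear baseline AUC-ROC: {auc_lr:.2f}")

# Rank predictors by impurity-based importance, mirroring the study's
# variable-importance analysis for the RF model.
ranked = sorted(zip(feature_names, rf.feature_importances_),
                key=lambda t: -t[1])
for name, imp in ranked:
    print(f"{name}: {imp:.3f}")
```

The study additionally reports SHAP value ranges per predictor; those would come from a separate explainability library applied to the fitted forest rather than from the impurity-based importances shown here.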
