Abstract

Development of a short-timeframe (6-12 months) kidney failure risk prediction model may serve to improve transitions from advanced chronic kidney disease (CKD) to kidney failure and reduce rates of unplanned dialysis. The optimal model for short-timeframe kidney failure risk prediction remains unknown. This retrospective study included 1757 consecutive patients with advanced CKD (mean age 66 years, estimated glomerular filtration rate 18 mL/min/1.73 m²). We compared the performance of Cox regression models using (a) baseline variables alone or (b) time-varying variables with that of machine learning models, (c) a random survival forest and (d) a random forest classifier, in the prediction of kidney failure over 6, 12, and 24 months. Performance metrics included area under the receiver operating characteristic curve (AUC-ROC) and maximum precision at 70% recall (PrRe70). Top-performing models were applied to two independent external cohorts. Compared to the baseline Cox model, the machine learning and time-varying Cox models demonstrated higher 6-month performance [Cox baseline: AUC-ROC 0.85 (95% CI 0.84-0.86), PrRe70 0.53 (95% CI 0.51-0.55); Cox time-varying: AUC-ROC 0.88 (95% CI 0.87-0.89), PrRe70 0.62 (95% CI 0.60-0.64); random survival forest: AUC-ROC 0.87 (95% CI 0.86-0.88), PrRe70 0.61 (95% CI 0.57-0.64); random forest classifier: AUC-ROC 0.88 (95% CI 0.87-0.89), PrRe70 0.62 (95% CI 0.59-0.65)]. These trends persisted, but were less pronounced, at 12 months. The random forest classifier was the highest-performing model at 6 and 12 months. At 24 months, all models performed similarly. Model performance did not degrade significantly upon external validation. When predicting kidney failure over short timeframes among patients with advanced CKD, machine learning incorporating time-updated data provides enhanced performance compared with traditional Cox models.
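To make the evaluation metrics concrete, the minimal sketch below (not taken from the paper) shows one way to compute AUC-ROC and PrRe70 with scikit-learn, assuming binary kidney-failure outcomes at a fixed prediction horizon and predicted risk scores, and interpreting PrRe70 as the maximum precision attained at a recall of at least 70%. The patient data and function name are hypothetical.

# Minimal sketch (not from the paper): AUC-ROC and PrRe70
# (maximum precision at recall >= 70%) for a fixed prediction horizon.
import numpy as np
from sklearn.metrics import roc_auc_score, precision_recall_curve

def auc_and_prre70(y_true, y_score, recall_floor=0.70):
    """Return AUC-ROC and the maximum precision achieved at recall >= recall_floor."""
    auc = roc_auc_score(y_true, y_score)
    precision, recall, _ = precision_recall_curve(y_true, y_score)
    eligible = precision[recall >= recall_floor]  # precisions at acceptable recall
    prre = eligible.max() if eligible.size else np.nan
    return auc, prre

# Hypothetical example: 6-month kidney-failure outcomes and model risk scores
y_true = np.array([0, 1, 1, 1, 0, 1, 0, 0])
y_score = np.array([0.10, 0.72, 0.80, 0.30, 0.20, 0.90, 0.40, 0.55])
auc, prre70 = auc_and_prre70(y_true, y_score)
print(f"AUC-ROC: {auc:.2f}, PrRe70: {prre70:.2f}")

In practice, bootstrapping over patients at each horizon (6, 12, and 24 months) would give the confidence intervals reported around these point estimates.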
