Abstract

Research Objective
Heart failure (HF)–related hospitalizations are a growing public health burden, especially among older adults. Risk calculators for HF readmission identify patients at high risk who may benefit from outpatient interventions to improve outcomes and prevent readmissions. We hypothesized that incorporating additional variables and using machine learning approaches would improve the performance of existing 30‐day readmission risk calculators.

Study Design
We evaluated the performance of several published risk calculators for predicting 30‐day readmission after heart failure hospitalization by: 1) updating the coefficients based on data from a new, ethnically diverse HF population; 2) developing a new model that incorporates additional variables along with updated coefficients; and 3) developing new models with all variables using machine learning approaches. We used an 80%/20% split sample for development and validation testing. The risk calculators tested included the LACE+ Index and Yale CORE, which use traditional clinical variables such as comorbidities, laboratory values, prior utilization, and vital signs. For the model with additional variables, we included all variables used in the original models plus the Comorbidity Point Score (COPS), Laboratory‐based Acute Physiology Score (LAPS), cardiovascular medications, discharge status, and socioeconomic status. We evaluated the original models, and the original models plus the additional variables, using logistic regression. For the machine learning approaches, we used lasso penalized regression and gradient boosting with k‐fold cross‐validation to avoid overfitting. We assessed model performance using area under the curve (AUC) and calibration plots.

Population Studied
We identified 38,234 adults hospitalized for HF between 2012 and 2017 within Kaiser Permanente Northern California, an integrated health care delivery system covering over 4.4 million members.

Principal Findings
Discrimination (AUC) was poor using the original models: LACE+ [0.60 (0.59‐0.62)] and Yale CORE [0.57 (0.56‐0.59)]. Including the additional variables resulted in a small improvement in AUC: LACE+ [0.62 (0.60‐0.64)] and Yale CORE [0.62 (0.60‐0.64)]. The lasso model [0.67 (0.65‐0.68)] and gradient boosting model [0.67 (0.65‐0.68)] resulted in greater improvement. Calibration plots showed generally good calibration across all models, with modest improvements after adding the additional data domains and using the machine learning approaches.

Conclusions
Incorporating additional data domains led to small, statistically significant improvements in model discrimination while maintaining good calibration for published models that predict readmission. Machine learning approaches resulted in even greater improvement and overall moderate discrimination.

Implications for Policy or Practice
We were able to increase the utility of these published risk calculators for readmission after discharge from an HF hospitalization by including additional data domains and using machine learning approaches. Health systems attempting to adapt published risk calculators for readmission should consider including these data domains and using machine learning approaches to improve performance.

Primary Funding Source
The Permanente Medical Group Delivery Science Research Program.
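For readers interested in how such an analysis might be set up, the following is a minimal sketch of the modeling workflow described in the Study Design (80%/20% split-sample validation, lasso-penalized logistic regression with k-fold cross-validation, gradient boosting, and AUC/calibration assessment). This is not the authors' code: it assumes scikit-learn, substitutes synthetic data for the Kaiser Permanente cohort, and all feature definitions and hyperparameters are illustrative.

# Illustrative sketch only -- not the authors' implementation.
# Assumes scikit-learn; synthetic data stands in for the HF cohort.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression, LogisticRegressionCV
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.calibration import calibration_curve

# Synthetic stand-in: rows = HF hospitalizations, columns = predictors
# (comorbidities, labs, vitals, prior utilization, COPS, LAPS, etc.).
X, y = make_classification(n_samples=5000, n_features=30, n_informative=10,
                           weights=[0.8, 0.2], random_state=0)

# 80%/20% split-sample development and validation, as in the abstract.
X_dev, X_val, y_dev, y_val = train_test_split(
    X, y, test_size=0.20, stratify=y, random_state=0)

models = {
    # Ordinary logistic regression (re-estimating coefficients on new data).
    "logistic": make_pipeline(StandardScaler(),
                              LogisticRegression(max_iter=1000)),
    # Lasso-penalized logistic regression; k-fold CV selects the penalty.
    "lasso": make_pipeline(StandardScaler(),
                           LogisticRegressionCV(penalty="l1", solver="saga",
                                                cv=5, max_iter=5000)),
    # Gradient boosting on all variables.
    "gbm": GradientBoostingClassifier(random_state=0),
}

for name, model in models.items():
    model.fit(X_dev, y_dev)
    p = model.predict_proba(X_val)[:, 1]
    auc = roc_auc_score(y_val, p)                                 # discrimination
    frac_pos, mean_pred = calibration_curve(y_val, p, n_bins=10)  # calibration
    print(f"{name}: validation AUC = {auc:.3f}")

Note that the lasso penalty is tuned by cross-validation within the 80% development sample only, so the held-out 20% is touched just once for the final AUC and calibration assessment, mirroring the split-sample design described above.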

