Background
Clinical prediction models have the potential to improve the quality of care and enhance patient safety outcomes. A Computer-aided Risk Scoring System (CARSS) was previously developed to predict in-hospital mortality following emergency admission from routinely collected blood tests and vital signs. We aimed to externally validate the CARSS model.
Methods
In this retrospective external validation study, we considered all adult (≥18 years) emergency medical admissions discharged from The Rotherham Foundation Trust (TRFT), UK, between 11/11/2020 and 11/11/2022. We assessed the predictive performance of the CARSS model in terms of discrimination (c-statistic) and calibration (calibration slope, calibration intercept, and calibration plots).
Results
Of 32,774 admissions, 20,422 (62.3%) were included. The TRFT sample had similar demographic characteristics to the development sample but higher in-hospital mortality (6.1% versus 5.7%). The CARSS model demonstrated good discrimination (c-statistic 0.87, 95% CI 0.86–0.88) and, after re-calibration for the difference in baseline mortality, good calibration to the TRFT dataset (slope = 1.03, 95% CI 0.98–1.08; intercept = 0, 95% CI −0.06 to 0.07); before re-calibration the intercept was 0.96 (95% CI 0.90–1.03).
Conclusion
External validation showed that the CARSS model under-predicted in-hospital mortality in the TRFT dataset. After re-calibration to correct for the difference in baseline risk of death between the development and validation datasets, the model showed adequate performance and can be considered externally validated.
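To illustrate the validation metrics named above (c-statistic, calibration slope and intercept, and recalibration-in-the-large), the following is a minimal sketch, not the authors' code. It assumes hypothetical arrays `y` (observed 0/1 in-hospital mortality) and `p` (CARSS predicted risks) for a validation cohort, and uses scikit-learn and statsmodels as an example tooling choice.

```python
# Sketch of external-validation metrics; data below are simulated placeholders.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
p = rng.uniform(0.01, 0.5, size=5000)   # hypothetical predicted risks
y = rng.binomial(1, p)                  # hypothetical observed outcomes

# Discrimination: c-statistic, i.e. the area under the ROC curve.
c_statistic = roc_auc_score(y, p)

# Calibration: regress the outcome on the linear predictor (logit of the
# predicted risk). Slope ~1 and intercept ~0 indicate good calibration.
lp = np.log(p / (1 - p))
cal_fit = sm.GLM(y, sm.add_constant(lp), family=sm.families.Binomial()).fit()
intercept, slope = cal_fit.params

# Recalibration-in-the-large: re-estimate only the intercept with the linear
# predictor as an offset (slope fixed at 1), correcting for a different
# baseline mortality rate in the validation setting.
recal_fit = sm.GLM(y, np.ones((len(y), 1)), offset=lp,
                   family=sm.families.Binomial()).fit()
new_intercept = recal_fit.params[0]

print(f"c-statistic={c_statistic:.3f}, slope={slope:.3f}, "
      f"intercept={intercept:.3f}, recalibrated intercept={new_intercept:.3f}")
```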