Abstract

Background

Risk prediction models are used in critical care for risk stratification, summarising and communicating risk, supporting clinical decision-making and benchmarking performance. However, they require validation before they can be used with confidence, ideally using independently collected data from a different source to that used to develop the model. The aim of this study was to validate the Intensive Care National Audit & Research Centre (ICNARC) model using independently collected data from critical care units in Scotland.

Methods

Data were extracted from the Scottish Intensive Care Society Audit Group (SICSAG) database for the years 2007 to 2009. Recoding and mapping of variables was performed, as required, to apply the ICNARC model (2009 recalibration) to the SICSAG data using standard computer algorithms. The performance of the ICNARC model was assessed for discrimination, calibration and overall fit and compared with that of the Acute Physiology And Chronic Health Evaluation (APACHE) II model.

Results

There were 29,626 admissions to 24 adult, general critical care units in Scotland between 1 January 2007 and 31 December 2009. After exclusions, 23,269 admissions were included in the analysis. The ICNARC model outperformed APACHE II on measures of discrimination (c index 0.848 versus 0.806), calibration (Hosmer-Lemeshow chi-squared statistic 18.8 versus 214) and overall fit (Brier’s score 0.140 versus 0.157; Shapiro’s R 0.652 versus 0.621). Model performance was consistent across the three years studied.

Conclusions

The ICNARC model performed well when validated in an external population to that in which it was developed, using independently collected data.

Electronic supplementary material

The online version of this article (doi:10.1186/1471-2253-14-116) contains supplementary material, which is available to authorized users.
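The three performance measures reported above (c index for discrimination, Hosmer-Lemeshow chi-squared for calibration, Brier score for overall fit) can be illustrated with a minimal sketch. This is not the study's actual implementation; the function names, the all-pairs c-index calculation and the equal-size decile grouping for Hosmer-Lemeshow are illustrative assumptions:

```python
def brier_score(probs, outcomes):
    """Mean squared difference between predicted risk and observed outcome
    (0 = survived, 1 = died); lower is better."""
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)


def c_index(probs, outcomes):
    """Probability that a randomly chosen non-survivor has a higher
    predicted risk than a randomly chosen survivor; ties count 0.5."""
    deaths = [p for p, y in zip(probs, outcomes) if y == 1]
    survivors = [p for p, y in zip(probs, outcomes) if y == 0]
    concordant = 0.0
    for d in deaths:
        for s in survivors:
            if d > s:
                concordant += 1.0
            elif d == s:
                concordant += 0.5
    return concordant / (len(deaths) * len(survivors))


def hosmer_lemeshow(probs, outcomes, groups=10):
    """Chi-squared statistic comparing observed and expected deaths across
    risk-ordered groups: sum of (O - E)^2 / (N * p_bar * (1 - p_bar))."""
    ranked = sorted(zip(probs, outcomes))
    n = len(ranked)
    chi2 = 0.0
    for g in range(groups):
        chunk = ranked[g * n // groups:(g + 1) * n // groups]
        if not chunk:
            continue
        observed = sum(y for _, y in chunk)
        expected = sum(p for p, _ in chunk)
        mean_p = expected / len(chunk)
        if expected == 0 or mean_p == 1:
            continue  # degenerate group; skip to avoid division by zero
        chi2 += (observed - expected) ** 2 / (expected * (1 - mean_p))
    return chi2
```

A well-calibrated model yields a small Hosmer-Lemeshow statistic (observed deaths close to expected in every risk group), which is why the ICNARC model's 18.8 indicates much better calibration than APACHE II's 214.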

Highlights

  • Risk prediction models are used in critical care for risk stratification, summarising and communicating risk, supporting clinical decision-making and benchmarking performance

  • The following admissions were excluded from the analysis: admissions flagged in the database as ‘Exclude from severity of illness scoring’; readmissions of the same patient within the same acute hospital stay; admissions missing the outcome of acute hospital mortality; admissions missing age, location prior to admission or primary reason for admission to the critical care unit; and admissions for whom the primary reason for admission was unable to be mapped onto the Intensive Care National Audit & Research Centre (ICNARC) Coding Method

  • Data were extracted from the Scottish Intensive Care Society Audit Group (SICSAG) database for 29,626 admissions to 24 adult, general critical care units between 1 January 2007 and 31 December 2009

Introduction

Risk prediction models are used in critical care for risk stratification, summarising and communicating risk, supporting clinical decision-making and benchmarking performance. They require validation before they can be used with confidence, ideally using independently collected data from a different source to that used to develop the model. The Intensive Care National Audit & Research Centre (ICNARC) risk prediction model was developed and validated using data from the Case Mix Programme with the objective of improving on the existing models [7]. It has subsequently been validated using further data from the Case Mix Programme, including external validation among critical care units that joined the programme after the development of the model [8], but it has never undergone validation using independently collected data.

