Abstract

Benchmarking performance across hospitals requires proper adjustment for differences in baseline patient and procedural risk. Recently, a Risk Stratification Index was developed from Medicare data, which used all diagnosis and procedure codes associated with each stay but did not distinguish present-on-admission (POA) diagnoses from hospital-acquired diagnoses. We sought to (1) develop and validate a risk index for in-hospital mortality using only POA diagnoses, principal procedures, and secondary procedures occurring before the date of the principal procedure (POARisk) and (2) compare hospital performance metrics obtained using the POARisk model with those obtained using a similarly derived model which ignored the timing of diagnoses and procedures (AllCodeRisk). We used the 2004-2009 California State Inpatient Database to develop, calibrate, and prospectively test our models (n = 24 million). Elastic net logistic regression was used to estimate the two risk indices. Agreement in hospital performance under the two respective risk models was assessed by comparing observed-to-expected mortality ratios; acceptable agreement was predefined as the AllCodeRisk-based observed-to-expected ratio falling within ± 20% of the POARisk-based observed-to-expected ratio for more than 95% of hospitals. After recalibration, goodness of fit (i.e., model calibration) within the 2009 data was excellent for both models. C-statistics were 0.958 and 0.981, respectively, for the POARisk and AllCodeRisk models. The AllCodeRisk-based observed-to-expected ratio was within ± 20% of the POARisk-based observed-to-expected ratio for only 89% of hospitals, slightly below the predefined 95% criterion for acceptable agreement. Consideration of POA coding meaningfully improved hospital performance measurement. The POARisk model should be used for risk adjustment when POA data are available.
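The sketch below is not the authors' code; it is a minimal illustration, on synthetic data, of the two analytic steps named in the abstract: fitting an elastic-net-penalized logistic regression risk model and comparing hospital-level observed-to-expected (O/E) mortality ratios under the ± 20% agreement criterion. All variable names, column names, and penalty settings are hypothetical and are not the values used in the study.

```python
# Hypothetical sketch of the two steps described in the abstract, on synthetic data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for coded predictors (e.g., indicators for POA diagnoses
# and procedures) and for in-hospital mortality.
n_stays, n_codes, n_hospitals = 20_000, 50, 40
X = rng.binomial(1, 0.1, size=(n_stays, n_codes))
true_beta = rng.normal(0, 0.5, size=n_codes)
died = rng.binomial(1, 1 / (1 + np.exp(-(X @ true_beta - 4))))
hospital_id = rng.integers(0, n_hospitals, size=n_stays)

# Elastic-net logistic regression (mixed L1/L2 penalty); C and l1_ratio here
# are arbitrary illustrative values, not those estimated in the study.
model = LogisticRegression(penalty="elasticnet", solver="saga",
                           l1_ratio=0.5, C=1.0, max_iter=5000)
model.fit(X, died)
expected = model.predict_proba(X)[:, 1]

# Hospital-level observed-to-expected mortality ratios:
# O/E = (observed deaths) / (sum of predicted death probabilities).
df = pd.DataFrame({"hospital_id": hospital_id, "died": died, "expected": expected})
sums = df.groupby("hospital_id")[["died", "expected"]].sum()
oe = sums["died"] / sums["expected"]

# Agreement criterion from the abstract: one model's O/E ratio within +/- 20%
# of the other's for more than 95% of hospitals. A perturbed copy stands in
# for the second model's ratios purely to illustrate the calculation.
oe_alt = oe * rng.normal(1.0, 0.1, size=len(oe))
within_20pct = np.abs(oe_alt / oe - 1) <= 0.20
print(f"Hospitals within +/- 20%: {within_20pct.mean():.1%}")
```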
