Abstract

When comparing the performance of two risk prediction models, several metrics exist to quantify prognostic improvement, including the change in the area under the Receiver Operating Characteristic curve, the Integrated Discrimination Improvement, the Net Reclassification Index at event rate, the change in Standardized Net Benefit, the change in Brier score, and the change in scaled Brier score. We explore the behavior of, and the interrelationships among, these metrics under multivariate normality in nested and nonnested model comparisons. We demonstrate that, within the framework of linear discriminant analysis, all six statistics are functions of the squared Mahalanobis distance, a robust metric that properly measures discrimination by quantifying the separation between the risk scores of events and nonevents. These relationships are important for overall interpretability and clinical usefulness. Through simulation, we demonstrate that the performance of the theoretical estimators under normality is comparable to, or better than, that of the empirical estimation methods typically used by investigators. In particular, the theoretical estimators for the Net Reclassification Index and the change in Standardized Net Benefit exhibit less variability than their empirically estimated counterparts. Finally, we explore how these metrics behave with potentially nonnormal data by applying these methods in a practical example based on the sex-specific cardiovascular disease risk models from the Framingham Heart Study. Our findings aim to give greater insight into the behavior of these measures and the connections among them, and to provide additional estimation methods with less variability for the Net Reclassification Index and the change in Standardized Net Benefit.
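As a minimal illustration of the kind of Mahalanobis-based relationship described above (not the paper's estimators): in the classical binormal/LDA setting with a common covariance matrix, the AUC of the linear discriminant score equals Φ(Δ/√2), where Δ² is the squared Mahalanobis distance between the event and nonevent predictor means. The sketch below checks this numerically; the predictor means, covariance, and sample sizes are arbitrary assumptions chosen for illustration.

```python
import numpy as np
from scipy.stats import norm, rankdata

# Hypothetical predictor distributions (assumed for illustration only):
# events and nonevents are multivariate normal with a common covariance.
rng = np.random.default_rng(2024)
mu0 = np.zeros(3)                         # nonevent mean
mu1 = np.array([0.5, 0.3, 0.2])           # event mean
Sigma = np.array([[1.0, 0.3, 0.1],
                  [0.3, 1.0, 0.2],
                  [0.1, 0.2, 1.0]])       # common covariance

# Squared Mahalanobis distance between the two predictor means.
diff = mu1 - mu0
delta_sq = diff @ np.linalg.solve(Sigma, diff)

# Under the binormal/LDA setting, the AUC of the linear discriminant score
# is Phi(Delta / sqrt(2)); this is the type of closed-form link between a
# performance metric and the squared Mahalanobis distance discussed above.
auc_theory = norm.cdf(np.sqrt(delta_sq / 2))

# Empirical check: simulate events and nonevents, score them with Fisher's
# discriminant direction, and compute the rank-based (Mann-Whitney) AUC.
n = 20_000
x0 = rng.multivariate_normal(mu0, Sigma, size=n)   # nonevents
x1 = rng.multivariate_normal(mu1, Sigma, size=n)   # events
w = np.linalg.solve(Sigma, diff)                    # LDA coefficients
s0, s1 = x0 @ w, x1 @ w

ranks = rankdata(np.concatenate([s0, s1]))
auc_empirical = (ranks[n:].sum() - n * (n + 1) / 2) / (n * n)

print(f"squared Mahalanobis distance: {delta_sq:.3f}")
print(f"theoretical AUC:              {auc_theory:.4f}")
print(f"empirical AUC:                {auc_empirical:.4f}")
```

Analogous closed-form links for the other five metrics follow the same pattern in the paper, with the squared Mahalanobis distance as the common underlying quantity.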
