Abstract

Calibration in binary prediction models, that is, the agreement between model predictions and observed outcomes, is an important aspect of assessing a model's utility for characterizing risk in future data. A popular technique for assessing model calibration, first proposed by D. R. Cox in 1958, involves fitting a logistic model with an intercept and a slope coefficient for the logit of the estimated probability of the outcome; good calibration is evident if these parameters do not appreciably differ from 0 and 1, respectively. In practice, however, the form of miscalibration may be more complicated. In this article, we expand the Cox calibration model to allow for more general parameterizations and derive, from this more flexible model, a relative measure of miscalibration between two competing models. We present an example implementation using data from the US Agency for Healthcare Research and Quality.
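As a point of reference for the Cox calibration model described above, the sketch below fits a logistic regression of the observed binary outcome on the logit of the estimated probabilities and reports the resulting intercept and slope. This is a minimal illustration, assuming Python with numpy and statsmodels; the function name `cox_calibration` and the simulated data are hypothetical and are not taken from the article.

```python
import numpy as np
import statsmodels.api as sm


def cox_calibration(y, p_hat):
    """Fit the Cox (1958) calibration model: logit(P(Y=1)) = a + b * logit(p_hat).

    Good calibration corresponds to intercept a near 0 and slope b near 1.
    """
    # Clip predicted probabilities away from 0 and 1 so the logit is finite.
    p_hat = np.clip(p_hat, 1e-10, 1 - 1e-10)
    logit_p = np.log(p_hat / (1 - p_hat))

    # Design matrix with an intercept column and the logit of the predictions.
    X = sm.add_constant(logit_p)
    fit = sm.Logit(y, X).fit(disp=0)
    intercept, slope = fit.params
    return intercept, slope, fit


# Illustrative usage with simulated data (hypothetical, for demonstration only).
rng = np.random.default_rng(0)
true_p = rng.uniform(0.05, 0.95, size=2000)
y = rng.binomial(1, true_p)
# Slightly noisy predictions of the true risk.
p_hat = np.clip(true_p + rng.normal(0, 0.05, size=2000), 0.01, 0.99)

a, b, _ = cox_calibration(y, p_hat)
print(f"intercept = {a:.3f} (ideal 0), slope = {b:.3f} (ideal 1)")
```

Under this check, marked departures of the intercept from 0 or the slope from 1 would indicate miscalibration; the article's extension generalizes this parameterization rather than relying on the two-parameter form alone.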
