Abstract

Probability predictions from binary regressions or machine learning methods ought to be calibrated: if an event is predicted to occur with probability $x$, it should materialize with approximately that frequency, which means that the so-called calibration curve $p(\cdot)$ should equal the identity, i.e., $p(x) = x$ for all $x$ in the unit interval. We propose honest calibration assessment based on novel confidence bands for the calibration curve, which are valid subject only to the natural assumption of isotonicity. Besides testing the classical goodness-of-fit null hypothesis of perfect calibration, our bands facilitate inverted goodness-of-fit tests whose rejection allows for the sought-after conclusion of a sufficiently well-specified model. We show that our bands have a finite-sample coverage guarantee, are narrower than those of existing approaches, and adapt to the local smoothness of the calibration curve $p$ and the local variance of the binary observations. In an application to modelling predictions of an infant having low birth weight, the bands give informative insights into model calibration.
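
To make the notion of a calibration curve concrete, the following is a minimal sketch, assuming Python with scikit-learn. It estimates $p(\cdot)$ under the isotonicity assumption via isotonic regression of binary outcomes on issued predictions; it illustrates the object the paper studies, not the paper's confidence-band construction, and the simulated forecaster with true curve $p(x) = x^{1.5}$ is purely hypothetical.

```python
# Sketch: estimating a calibration curve p(x) from binary outcomes under
# the isotonicity assumption. Perfect calibration corresponds to p(x) = x.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)

# Hypothetical miscalibrated forecaster: the event occurs with probability
# x**1.5, not with the issued probability x.
x = rng.uniform(size=2000)          # issued probability predictions
y = rng.binomial(1, x ** 1.5)       # binary outcomes; true p(x) = x**1.5

# Isotonic regression of outcomes on predictions gives a monotone
# estimate of the calibration curve.
iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
iso.fit(x, y)

# Compare the estimate with the identity on a grid; systematic deviations
# from p(x) = x indicate miscalibration.
for g in np.linspace(0.0, 1.0, 11):
    print(f"x = {g:.1f}: estimated p(x) = {iso.predict([g])[0]:.3f}")
```

The paper's contribution is to surround such an isotonic estimate with confidence bands that carry a finite-sample coverage guarantee, so that conclusions about (mis)calibration are statistically justified rather than read off a point estimate.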
