Abstract

The transformation of credit scores into probabilities of default plays an important role in credit risk estimation. Linear logistic regression has developed into a standard calibration approach in the banking sector. With the advent of machine learning techniques in the discriminatory phase of credit risk models, however, the standard calibration approach is coming under scrutiny again. In particular, the assumptions behind linear logistic regression provide critics with a target. The previous literature has converted the calibration problem into a regression task without any loss of generality. In this paper, we draw on recent academic results to suggest two new single-parameter families of differentiable functions as candidates for this regression. The derivation of these two families is based on the maximum entropy principle and thus relies on a minimum number of assumptions. We compare the performance of four calibration approaches on a real-world data set and find that one of the new single-parameter families outperforms linear logistic regression. Further, we develop an approach to quantify the part of the general estimation error of probabilities of default that stems from the statistical dispersion of the discriminatory power.
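To make the standard calibration approach concrete, the following is a minimal sketch of score-to-PD calibration via linear logistic regression, i.e. fitting PD(s) = sigmoid(a + b·s) by minimizing the log-loss on observed defaults. This illustrates only the baseline method the abstract refers to; the paper's maximum-entropy families and the data set are not reproduced here, and the function names and the plain gradient-descent fit are illustrative choices.

```python
import numpy as np

def fit_logistic_calibration(scores, defaults, lr=0.1, n_iter=5000):
    """Fit PD(s) = sigmoid(a + b*s) by gradient descent on the log-loss.

    Illustrative baseline only: the paper's proposed single-parameter
    maximum-entropy families replace this functional form.
    """
    s = np.asarray(scores, dtype=float)
    y = np.asarray(defaults, dtype=float)
    a, b = 0.0, 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(a + b * s)))  # current calibrated PDs
        # Gradients of the mean log-loss w.r.t. intercept and slope
        a -= lr * np.mean(p - y)
        b -= lr * np.mean((p - y) * s)
    return a, b

def predict_pd(scores, a, b):
    """Map raw credit scores to calibrated probabilities of default."""
    s = np.asarray(scores, dtype=float)
    return 1.0 / (1.0 + np.exp(-(a + b * s)))
```

At the log-loss optimum the fitted PDs average to the observed default rate, which is one reason this calibration is popular in practice.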
