Abstract

We consider the problem of identifying inaccurate default probability estimates in credit rating systems. Since the validation of these estimates usually entails performing multiple tests, there is an increased risk of erroneously dismissing correctly calibrated default probabilities. We use multiple-testing procedures to control this risk of committing type-I errors as measured by the family-wise error rate (FWER) and the false discovery rate for finite sample sizes. For the FWER, we also consider procedures that take possible discreteness of the data (and the test statistics) into account. The performance of these methods is illustrated in a simulation setting and for empirical default data. The results show that both types of multiple-testing procedure can serve as helpful tools for identifying inaccurate estimates while maintaining a predefined level of type-I error.
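The abstract does not name the specific procedures used; as an illustrative sketch only, two classical methods of the kinds it describes are the Holm step-down procedure (which controls the FWER) and the Benjamini-Hochberg step-up procedure (which controls the FDR). The minimal self-contained implementations below are assumptions for illustration, not the paper's actual methodology; the p-values would come from per-rating-grade calibration tests such as binomial tests of the estimated default probabilities.

```python
def holm_reject(pvals, alpha=0.05):
    """Holm step-down procedure: controls the family-wise error rate (FWER).

    Compares the k-th smallest p-value against alpha / (m - k + 1) and
    stops at the first non-rejection.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    rejected = [False] * m
    for rank, i in enumerate(order):  # rank = 0, 1, ..., m-1
        if pvals[i] <= alpha / (m - rank):
            rejected[i] = True
        else:
            break  # step-down: once one test fails, all larger p-values fail
    return rejected


def bh_reject(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: controls the false discovery rate (FDR).

    Finds the largest k with p_(k) <= k * alpha / m and rejects the
    hypotheses belonging to the k smallest p-values.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * alpha / m:
            k_max = rank
    rejected = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            rejected[i] = True
    return rejected
```

On the same set of p-values, Holm is typically more conservative than Benjamini-Hochberg: for example, with p-values `[0.001, 0.02, 0.03, 0.2]` at `alpha = 0.05`, Holm rejects only the first hypothesis, while Benjamini-Hochberg rejects the first three. This mirrors the abstract's distinction between controlling the FWER and the (less strict) FDR.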
