Abstract

One-year probability of default (PD) can be seen as a snapshot of lifetime PD. Two main reasons suggest treating one-year and lifetime ECLs separately. Firstly, banks have been developing one-year PD models over the last two decades to meet Basel II regulatory requirements, as detailed in Section 2.1. Secondly, a building-block structure that splits one-year and lifetime PDs facilitates the learning process. As a starting point, Section 2.2 focuses on how default events are defined for accounting purposes. As a following step, the process of building a consistent PD database is investigated in Section 2.3. Generalized linear models (GLMs) are explored as a paradigm for one-year PD estimation; indeed, this is the approach most commonly adopted in the banking industry. A two-step scheme based on scorecard development and its calibration allows us to estimate point-in-time (PIT) PDs. In recent years, growing attention has been devoted to big data. Machine learning (ML) algorithms play a key role in this field, as detailed in Section 2.4. Classification and regression trees (CARTs), bagging, random forest, and boosting are studied both to challenge existing models and to explore new PD modelling solutions. While wide data availability encourages the use of data-driven methods, low-default portfolios and data scarcity are other challenges one may need to face. Indeed, a limited number of defaults requires bespoke methods. Likewise, a lack of deep historical data forces banks to develop ad hoc approaches for one-year PD models, as detailed in Section 2.5. Examples and case studies are explored throughout the chapter by means of R software. The laboratory Section 2.6 provides details on how to perform the implementation by means of the SAS language.
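As a minimal sketch of the two-step GLM scheme mentioned above, the following R snippet fits a logistic-regression scorecard and then calibrates the resulting score against observed defaults to obtain PIT PDs. The simulated data and the covariate names (ltv, dti, delinq_6m) are illustrative assumptions, not the chapter's actual case study.

```r
set.seed(123)

# Hypothetical one-year default database (illustrative covariates)
n  <- 5000
db <- data.frame(
  ltv       = runif(n, 0.2, 1.2),   # loan-to-value
  dti       = runif(n, 0.05, 0.60), # debt-to-income
  delinq_6m = rpois(n, 0.3)         # recent delinquencies
)
lin <- -4 + 2.5 * db$ltv + 3 * db$dti + 0.8 * db$delinq_6m
db$default_1y <- rbinom(n, 1, plogis(lin))  # one-year default flag

# Step 1: scorecard development via a GLM with logit link
scorecard <- glm(default_1y ~ ltv + dti + delinq_6m,
                 family = binomial(link = "logit"), data = db)
db$score <- predict(scorecard, type = "link")  # linear predictor (score)

# Step 2: calibration -- regress defaults on the score so that
# fitted PDs align with the point-in-time observed default rate
calib <- glm(default_1y ~ score, family = binomial, data = db)
db$pd_pit <- predict(calib, type = "response")

mean(db$pd_pit)     # portfolio-level PIT PD ...
mean(db$default_1y) # ... versus observed default rate
```

Separating the scorecard from its calibration lets the ranking power of the score be developed and validated independently of the level of the PDs, which is the rationale for the two-step structure.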
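Along the same lines, a random forest can serve as an ML challenger to the GLM. The sketch below reuses the simulated db data frame from the previous snippet; the randomForest package is an assumption (install.packages("randomForest") if needed) and stands in for the chapter's own implementation.

```r
library(randomForest)

# Random-forest challenger on the same illustrative covariates
rf <- randomForest(factor(default_1y) ~ ltv + dti + delinq_6m,
                   data = db, ntree = 200)

# Out-of-bag probability of the default class ("1") as a PD estimate
db$pd_rf <- predict(rf, type = "prob")[, "1"]

head(db[, c("pd_pit", "pd_rf")])  # compare GLM and challenger PDs
```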
