Abstract

This paper seeks to identify computationally efficient importance sampling (IS) algorithms for estimating large deviation probabilities for the loss on a portfolio of loans. Related literature typically assumes that realised losses on defaulted loans can be predicted with certainty, i.e., that loss given default (LGD) is non-random. In practice, however, LGD is impossible to predict and tends to be positively correlated with the default rate, a phenomenon typically referred to as PD-LGD correlation (here PD refers to probability of default, which is often used synonymously with default rate). There is a large literature on modelling stochastic LGD and PD-LGD correlation, but a dearth of literature on using importance sampling to estimate large deviation probabilities in those models. Numerical evidence indicates that the proposed algorithms are extremely effective at reducing the computational burden associated with obtaining accurate estimates of large deviation probabilities across a wide variety of PD-LGD correlation models that have been proposed in the literature.
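To make the PD-LGD correlation mechanism concrete, the sketch below simulates a stylized Gaussian one-factor model in which the same systematic factor drives both defaults and LGD, so scenarios with many defaults also tend to have high realised LGD. The model, the parameter values, and the specific LGD specification are illustrative assumptions made here for exposition; they are not the framework or calibration studied in the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(seed=1)

# Hypothetical parameters, chosen only for illustration
N = 1000                    # number of loans
pd_uncond = 0.02            # unconditional probability of default (PD)
rho = 0.15                  # loading of the default driver on the systematic factor
mu_lgd, b_lgd = 0.45, 0.6   # LGD level and its sensitivity to the systematic factor

def default_rate_and_lgd(z, rng):
    """One portfolio scenario conditional on the systematic factor z.

    Defaults follow a Gaussian one-factor threshold model; LGD is driven by
    the same factor, so a low z raises both the default rate and the realised
    LGD -- a Frye-style mechanism for PD-LGD correlation.
    """
    eps = rng.standard_normal(N)
    asset = np.sqrt(rho) * z + np.sqrt(1.0 - rho) * eps
    default = asset < norm.ppf(pd_uncond)
    eta = rng.standard_normal(N)
    lgd = norm.cdf(norm.ppf(mu_lgd) - b_lgd * z + 0.5 * eta)   # LGD in (0, 1)
    n_def = default.sum()
    realised_lgd = lgd[default].mean() if n_def > 0 else np.nan
    return n_def / N, realised_lgd

pairs = [default_rate_and_lgd(z, rng) for z in rng.standard_normal(2000)]
dr, lgd = np.array([p for p in pairs if not np.isnan(p[1])]).T
print("corr(default rate, realised LGD) ≈", round(np.corrcoef(dr, lgd)[0, 1], 2))
```

Running this typically yields a clearly positive correlation between the scenario default rate and the scenario-average LGD, which is the empirical regularity the abstract refers to.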

Highlights

  • This paper seeks to identify computationally efficient importance sampling (IS) algorithms for estimating large deviation probabilities for the loss on a portfolio of loans

  • There is a large literature on modelling stochastic loss given default (LGD) and PD-LGD correlation, but there is a paucity of literature on using importance sampling to estimate large deviation probabilities in those models

  • Note that since q(x, z) = 0 whenever μ(z) ≥ x, the large deviation approximation (LDA) suggests that P(L_N > x | Z = z) ≈ 1 whenever z lies in the region of interest (see the note below this list)
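For context on that last highlight: a standard conditional large-deviation (Cramér-type) approximation, written here under the assumption that L_N denotes the average loss per obligor, that individual losses are independent given Z = z with conditional cumulant generating function Λ(·, z), and that μ(z) is the conditional mean loss, reads as follows.

```latex
% Sketch under stated assumptions; the notation (L_N, q, mu, Lambda) follows the
% usual conditional Cramer-type large-deviation bound, not necessarily the
% paper's exact definitions.
\[
  P\bigl(L_N > x \,\big|\, Z = z\bigr) \;\approx\; e^{-N\, q(x,\, z)},
  \qquad
  q(x, z) \;=\; \sup_{\theta \ge 0} \bigl\{ \theta x - \Lambda(\theta, z) \bigr\}.
\]
```

The supremum is attained at θ = 0 whenever x ≤ μ(z), so q(x, z) = 0 and the right-hand side collapses to 1: conditional on such a z, exceeding the loss level x is not a rare event, which is the reading given in the highlight.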


Summary

Introduction

This paper seeks to identify computationally efficient importance sampling (IS) algorithms for estimating large deviation probabilities for the loss on a portfolio of loans. In practice the number of exposures is large (e.g., in the thousands) and prudent risk management requires one to assume that the individual losses are correlated. Importance sampling is a variance reduction technique that has the potential to significantly reduce the computational burden associated with obtaining accurate estimates of large deviation probabilities. The seminal paper in the area is Glasserman and Li (2005); other papers include Chan and Kroese (2010) and Scott and Metzler (2015). It is well documented empirically that portfolio-level LGD is stochastic but positively correlated with the portfolio-level default rate, as seen, for instance, in any of the studies listed in Kupiec (2008) or Frye and Jacobs (2012). It is worth noting that we do not require the components of Z to be independent of one another, and likewise for the components of Y_i.
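As a rough illustration of why IS helps here (and emphatically not a reproduction of the paper's proposed two-stage algorithm), the sketch below compares plain Monte Carlo with a simple importance-sampling estimator that shifts the mean of a Gaussian systematic factor toward the region of interest and reweights each scenario by the corresponding likelihood ratio. The one-factor model, the loss level x, and the mean shift mu_star are all illustrative assumptions; LGD is kept constant purely to isolate the IS idea.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(seed=2)

# Hypothetical one-factor portfolio (illustrative values only)
N = 1000                  # number of loans
pd_uncond, rho = 0.02, 0.15
lgd_const = 0.5           # constant LGD here, only to isolate the IS idea
x = 0.08                  # tail loss level: the target is P(average loss > x)
M = 10_000                # simulated scenarios per estimator

def avg_loss(z, rng):
    """Average portfolio loss for one draw of the systematic factor z."""
    eps = rng.standard_normal(N)
    default = np.sqrt(rho) * z + np.sqrt(1.0 - rho) * eps < norm.ppf(pd_uncond)
    return lgd_const * default.mean()

# Plain Monte Carlo: most scenarios never reach the tail, so the estimate is noisy.
hits = np.array([avg_loss(z, rng) > x for z in rng.standard_normal(M)], dtype=float)
p_mc, se_mc = hits.mean(), hits.std(ddof=1) / np.sqrt(M)

# Importance sampling: large losses occur when z is very negative, so sample z
# from N(mu_star, 1) and reweight by the likelihood ratio
# phi(z) / phi(z - mu_star) = exp(-mu_star * z + mu_star**2 / 2).
mu_star = -3.0
z_is = mu_star + rng.standard_normal(M)
w = np.exp(-mu_star * z_is + 0.5 * mu_star**2)
terms = np.array([wi * (avg_loss(z, rng) > x) for z, wi in zip(z_is, w)])
p_is, se_is = terms.mean(), terms.std(ddof=1) / np.sqrt(M)

print(f"plain MC: {p_mc:.2e} (std err {se_mc:.1e})")
print(f"IS      : {p_is:.2e} (std err {se_is:.1e})")
```

With these made-up numbers the shifted estimator typically attains a standard error an order of magnitude or more below plain Monte Carlo for the same number of scenarios. The paper's estimators go considerably further, choosing IS densities for both the systematic risk factors and the individual losses (see the one- and two-stage estimators outlined below) and handling stochastic, factor-dependent LGD.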

Large Portfolios and the Region of Interest
Systematic Risk Factors
Individual Losses
Conditional Tail Probabilities
Conditional Densities
Proposed Algorithm
General Principles
Identifying the Ideal IS Densities
Approximating the Ideal IS Densities
Summary and Intuition
One- and Two-Stage Estimators
Large First-Stage Weights
Large Rejection Constants
Computing θ
PD-LGD Correlation Framework
Exploring the Parameter Space
Implementation
Selecting the IS Density for the Systematic Risk Factors
First Stage
Computing Parameters in the Two-Factor Model
Computing Parameters in the One-Factor Model
Trimming Large Weights
Second Stage
Approximating θ
Sampling Individual Losses
Efficiency of the Second Stage
Performance Evaluation
Statistical Accuracy
Computational Time
Overall Performance
Findings
Concluding Remarks