Abstract

Over the last decade, researchers, practitioners, and regulators have had intense debates about how to treat the data collection threshold in operational risk modeling. Several approaches have been employed to fit the loss severity distribution: the empirical approach, the “naive” approach, the shifted approach, and the truncated approach. Since each approach is based on a different set of assumptions, different probability models emerge, and thus model uncertainty arises. The main objective of this paper is to understand the impact of model uncertainty on value-at-risk (VaR) estimators. To accomplish that, we take the bank’s perspective and study a single risk. Under this simplified scenario, we can solve the problem analytically (when the underlying distribution is exponential) and show that the analytical solution uncovers patterns among VaR estimates similar to those obtained by simulation (when data follow a Lomax distribution). We demonstrate that, for a fixed probability distribution, the truncated approach yields the lowest VaR estimates, which may be viewed as beneficial to the bank, whilst the “naive” and shifted approaches lead to higher VaR estimates. The advantages and disadvantages of each approach and the probability distributions under study are further investigated using a real data set of legal losses in a business unit (Cruz 2002).
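
To make the comparison concrete, the following minimal sketch (not the authors' code) fits an exponential severity to losses reported above a data collection threshold under the “naive”, shifted, and truncated treatments. The true mean, threshold, sample size, and quantile level are illustrative assumptions, and a high severity quantile is used here only as a rough single-loss VaR proxy (the paper's VaR is a quantile of the annual aggregate loss).

```python
# Minimal sketch: exponential ground-up losses are observed only above a
# collection threshold t; the severity model is fitted under the "naive",
# shifted, and truncated approaches. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(2024)

theta = 10.0                      # true exponential mean (assumed)
t = 5.0                           # data collection threshold (assumed)
losses = rng.exponential(theta, size=200_000)
obs = losses[losses > t]          # only losses above t are recorded

# Exponential mean estimates under each treatment of the threshold
theta_naive = obs.mean()          # ignore the threshold altogether
theta_shift = (obs - t).mean()    # model reported losses as t + Exp(theta)
theta_trunc = (obs - t).mean()    # MLE for the left-truncated exponential
                                  # (same number by memorylessness, but the
                                  #  fitted model lives on the ground-up scale)

# High severity quantile used as a rough single-loss VaR proxy
p = 0.999
q = -np.log(1 - p)
print(f"naive:     {theta_naive * q:10.1f}")
print(f"shifted:   {t + theta_shift * q:10.1f}")
print(f"truncated: {theta_trunc * q:10.1f}")   # the lowest of the three
```

Under these illustrative assumptions the truncated fit gives the smallest quantile and the naive fit the largest, matching the qualitative ordering described in the abstract.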

Highlights

  • Basel II/III and Solvency II are the leading international regulatory frameworks for the banking and insurance industries; they mandate that financial institutions build separate capital reserves for operational risk

  • According to the loss distribution approach (LDA), the risk-based capital is an extreme quantile of the annual aggregate loss distribution, known as value-at-risk (VaR); a Monte Carlo sketch of this calculation follows the list

  • In practice, typical scenarios would be near F(t) = 0.9 with moderate- or heavy-tailed severity distributions, which corresponds to quite unfavorable patterns in the table
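
The following minimal Monte Carlo sketch illustrates the LDA capital calculation referenced in the second highlight. The Poisson frequency, exponential severity, and parameter values are assumptions made for illustration only, not the paper's model.

```python
# Minimal LDA sketch: simulate annual aggregate losses S = X_1 + ... + X_N
# and read off an extreme quantile (VaR). Frequency/severity choices and
# parameters are hypothetical, not taken from the paper.
import numpy as np

rng = np.random.default_rng(1)

lam = 25.0          # mean annual loss count (Poisson), assumed
theta = 10.0        # mean individual loss (exponential), assumed
years = 100_000     # number of simulated years

counts = rng.poisson(lam, size=years)
agg = np.array([rng.exponential(theta, n).sum() for n in counts])

var_999 = np.quantile(agg, 0.999)   # risk-based capital under LDA
print(f"VaR_0.999 of the annual aggregate loss: {var_999:,.1f}")
```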

Summary

Introduction

Basel II/III and Solvency II are the leading international regulatory frameworks for the banking and insurance industries; they mandate that financial institutions build separate capital reserves for operational risk. As is known in practice, the severity distribution is a key driver of the capital estimate (Opdyke 2014). This is the part of the aggregate model where initial assumptions about the data collection threshold are most influential. The main objective of this paper is to understand the impact of model uncertainty on risk measurements and (hopefully) help settle the debate about the treatment of the data collection threshold in the context of capital estimation. Solving such a problem under a general setup (i.e., by considering many interdependent risks and multiple stakeholders) is only possible through extensive simulations, but that would not produce much insight. Key probabilistic features of the generalized Pareto distribution are presented, and several asymptotic theorems of mathematical statistics are specified.
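
As a small aside to the last sentence, the Lomax distribution used in the paper's simulations is a reparameterized special case of the generalized Pareto distribution; the sketch below checks this numerically with scipy, using arbitrary shape and scale values chosen for illustration.

```python
# Sketch: a Lomax (Pareto II) distribution with shape alpha and scale lam is
# a generalized Pareto distribution with shape 1/alpha and scale lam/alpha.
# Parameter values are arbitrary illustrations.
import numpy as np
from scipy import stats

alpha, lam = 3.0, 10.0
lomax = stats.lomax(c=alpha, scale=lam)
gpd = stats.genpareto(c=1.0 / alpha, scale=lam / alpha)

x = np.linspace(0.0, 100.0, 11)
print(np.allclose(lomax.cdf(x), gpd.cdf(x)))   # True: identical cdfs
print(lomax.ppf(0.999))                        # heavy-tailed 99.9% quantile
```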

Model Uncertainty
Motivation
Empirical Model
Parametric Models
Example 1
Example 2
Real-Data Example
Model Fitting
Model Validation
VaR Estimates
Model Predictions
Concluding Remarks