Abstract

Out-of-sample prediction is the ultimate validity test of risk models, one for which theoretical appeal and justification are irrelevant to predictive performance. Empirically, we show that all prominent risk models, including Cumulative Prospect Theory (CPT), Expected Utility Theory (EUT), Rank-Dependent Utility (RDU), and mean-variance, have poor out-of-sample predictability. To predict risk accurately, we develop Geometric Dispersion Theory (GDT), which is built on two asymmetric simple dispersion functions, one for risk-averse and the other for risk-prone behavior. These parameter-free functions are convex in consequences and concave in probabilities. The linear tradeoff among these two dispersion functions and the expected value (dubbed the "risk triad") explains all prominent risk paradoxes and delivers excellent out-of-sample predictions with only two parameters. Furthermore, by using value and weighting functions, GDT generalizes CPT, EUT, and RDU. While CPT and RDU are linear in the weighting and value functions, their one-parameter GDT generalization is convex in the weighting function and concave in the value function, a property we call meta-risk aversion. For example, GDT, as a function of RDU, can be strictly meta-risk averse even when RDU is strictly risk prone. We illustrate that the parametric GDT model can be easily assessed and validated. A three-parameter GDT model explains all risk paradoxes and outperforms CPT by 200% in out-of-sample prediction. Finally, we show how heterogeneous risk data can be decomposed into four types of predictable patterns: convex, concave, concave bowl, and convex bowl.
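For concreteness, a minimal sketch of the risk-triad tradeoff described above, in illustrative notation of our own (V, D_A, D_P, α, and β are assumed symbols, not taken from the paper), is

    V(L) = E[L] − α·D_A(L) + β·D_P(L),    α, β ≥ 0,

where E[L] is the expected value of a lottery L, D_A is the dispersion function associated with risk-averse behavior, D_P is the dispersion function associated with risk-prone behavior, and α and β are the two free parameters of the two-parameter model.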
