Abstract

A common statistical modelling paradigm used in actuarial pricing is (a) assuming that the loss model can be chosen from a dictionary of standard models and (b) selecting the model that provides the best trade-off between goodness of fit and complexity. Machine learning provides a rigorous framework for this selection/validation process. An alternative modelling paradigm, common in the sciences, is to prove the adequacy of a statistical model from first principles: for example, Planck’s distribution, which was found empirically to describe the spectral distribution of blackbody radiation, was explained by Einstein by assuming that radiation is made of quantised harmonic oscillators (photons). In this working party we have been exploring the extent to which loss models, too, can be derived from first principles. Traditionally, the Poisson, negative binomial, and binomial distributions are used as loss count models because they are familiar and easy to work with. We show how reasoning from first principles naturally leads to non-stationary Poisson processes, Lévy processes, and multivariate Bernoulli processes, depending on the context. For modelling severities, we build on previous research that shows how graph theory can be used to model property-like losses. We show how the methodology can be extended to deal with business interruption/supply chain risks by considering networks with higher-order dependencies. For liability business, we show the theoretical and practical limitations of traditional models such as the lognormal distribution. We explore the question of where the ubiquitous power-law behaviour comes from, finding a natural explanation in random growth models. We also address the derivation of severity curves in territories where compensation tables are used. This research is foundational in nature, but its results may prove useful to practitioners by guiding model selection and elucidating the relationship between the features of a risk and the model’s parameters.
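To make the claim-count discussion concrete, the sketch below simulates arrivals from a non-stationary (inhomogeneous) Poisson process, one of the model families the abstract mentions, using the standard Lewis–Shedler thinning construction. This is an illustrative assumption, not the working party's code: the intensity function lambda_t and all numerical values are hypothetical.

```python
# Minimal sketch: simulating claim counts from a non-stationary Poisson process
# via Lewis-Shedler thinning. The intensity lambda_t is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(42)

def lambda_t(t):
    """Hypothetical claim-arrival intensity (claims per year) with a seasonal cycle."""
    return 100.0 * (1.0 + 0.3 * np.sin(2.0 * np.pi * t))

def simulate_arrivals(T, lam_max, rng):
    """Simulate arrival times on [0, T] by thinning a homogeneous process of rate lam_max."""
    times = []
    t = 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)        # candidate arrival from the dominating process
        if t > T:
            break
        if rng.uniform() < lambda_t(t) / lam_max:  # accept with probability lambda(t) / lam_max
            times.append(t)
    return np.array(times)

# One year of simulated claim arrivals; the annual count is Poisson-distributed
# with mean equal to the integral of lambda_t over [0, 1].
arrivals = simulate_arrivals(T=1.0, lam_max=130.0, rng=rng)
print(f"simulated claim count: {arrivals.size}")
```

The dominating rate lam_max must be at least the maximum of lambda_t on the simulation window (here 130), otherwise the thinning step no longer yields the intended process.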