Abstract

Chapter Preview. Actuaries have been studying loss distributions since the emergence of the profession. Numerous studies have found that widely used distributions, such as the lognormal, Pareto, and gamma, do not fit insurance data well. Mixture distributions have gained popularity in recent years because of their flexibility in representing insurance losses across claim sizes, especially in the right tail. To incorporate mixture distributions into the framework of popular generalized linear models (GLMs), the authors propose using finite mixture models (FMMs) to analyze insurance loss data. The regression approach enhances traditional whole-book distribution analysis by capturing the impact of individual explanatory variables. FMMs improve on the standard GLM by addressing distribution-related problems such as heteroskedasticity, over- and underdispersion, unobserved heterogeneity, and fat tails. A case study with applications to claims triage and to high-deductible pricing using workers' compensation data illustrates these benefits.

Introduction

Conventional Large Loss Distribution Analysis

Large loss distributions have been studied extensively because of their importance in actuarial applications such as increased limit factors and excess loss pricing (Miccolis, 1977), reinsurance retention and layer analysis (Clark, 1996), high-deductible pricing (Teng, 1994), and enterprise risk management (Wang, 2002). Klugman et al. (1998) discussed frequency, severity, and aggregate loss distributions in detail in their book, which has been on the syllabus of the Casualty Actuarial Society exam Construction and Evaluation of Actuarial Models for many years. Keatinge (1999) demonstrated that popular single distributions, including those in Klugman et al. (1998), are not adequate to represent insurance losses well and suggested using mixed exponential distributions to improve the goodness of fit. Beirlant et al. (2001) proposed a flexible generalized Burr-gamma distribution to address the heavy tail of losses and validated the effectiveness of this parametric distribution by comparing its implied excess-of-loss reinsurance premiums with those of nonparametric and semiparametric alternatives. Matthys et al. (2004) presented an extreme quantile estimator to deal with extreme insurance losses. Fleming (2008) showed that the sample average of a small sample from a skewed population is most likely below the true mean and warned of the danger of making insurance pricing decisions without considering extreme events. Henry and Hsieh (2009) stressed the importance of understanding the heavy-tail behavior of a loss distribution and developed a tail index estimator under the assumption that insurance losses possess Pareto-type tails.
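To make the idea of a finite mixture concrete, the sketch below fits a two-component lognormal mixture to simulated claim severities and compares it with a single lognormal fit. This is only an illustrative, covariate-free example, not the authors' GLM-based FMM: the data are simulated, and scikit-learn's GaussianMixture applied to log-losses stands in for a full mixture regression.

```python
# Minimal illustrative sketch (assumption: not the chapter's implementation).
# Fit a two-component lognormal mixture to simulated claim severities by
# running a Gaussian mixture on log-losses, and compare it with a single
# lognormal fit via BIC.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)

# Simulated severities: many small attritional claims plus a heavy right tail.
small = rng.lognormal(mean=8.0, sigma=0.8, size=9000)
large = rng.lognormal(mean=11.0, sigma=1.2, size=1000)
losses = np.concatenate([small, large])

log_losses = np.log(losses).reshape(-1, 1)

# Single-component fit: equivalent to an ordinary lognormal severity model.
single = GaussianMixture(n_components=1, random_state=0).fit(log_losses)

# Two-component finite mixture: one component for routine claims,
# one for the large-loss tail.
mixture = GaussianMixture(n_components=2, random_state=0).fit(log_losses)

print("BIC, single lognormal:", round(single.bic(log_losses), 1))
print("BIC, 2-part mixture  :", round(mixture.bic(log_losses), 1))
print("mixture weights      :", np.round(mixture.weights_, 3))
print("component log-means  :", np.round(mixture.means_.ravel(), 3))
```

On data of this kind, the lower BIC of the two-component fit mirrors the chapter's point that a single parametric severity distribution struggles to capture both the bulk of small claims and the fat right tail.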
