Abstract

The generalized linear model (GLM) is a well-developed statistical model widely used in actuarial practice for insurance ratemaking, risk classification, and reserving. Recently, there has been an explosion of data mining techniques for refining statistical models, both for better variable selection procedures and for improved prediction accuracy. Among these, regularization techniques, or penalized likelihood methods, have attracted increasing interest. In this paper, we explore the idea of the Least Absolute Shrinkage and Selection Operator (LASSO) in a Bayesian framework within a dependent frequency-severity model, as a refinement to the dependent GLM developed by Garrido et al. (2016). The LASSO technique is a penalized least squares procedure developed by Tibshirani (1996) and was given a Bayesian interpretation by Park and Casella (2008). We show that a new penalty function, which we call the log-adjusted absolute deviation (LAAD), emerges if we theoretically extend the Bayesian LASSO further using conjugate hyperprior distributional assumptions. While this result is easy to implement for variable selection and prediction, we recognize that least squares has been poorly regarded in insurance ratemaking. We therefore modify the setting to a penalized dependent GLM within this extended Bayesian LASSO framework. Within this framework, the regression estimates are derived by optimizing a penalized likelihood under a hyperprior distribution for the L1 penalty parameter lambda. This approach has the advantage of producing a consistent estimator of the regression coefficients while simultaneously performing variable selection. We calibrate our proposed model using an auto insurance dataset from a Singapore insurance company, containing observed claim counts and amounts from a portfolio of policyholders.
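To illustrate the shrinkage-and-selection behavior that the abstract builds on, the sketch below implements the classical LASSO of Tibshirani (1996) via coordinate descent with soft-thresholding. This is a minimal illustration of the base technique only, not the paper's Bayesian LAAD penalty or dependent GLM; the data, penalty value, and variable names are hypothetical.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=500):
    """Coordinate descent for the LASSO objective:
    (1 / (2 n)) * ||y - X beta||^2 + lam * ||beta||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_norm = (X ** 2).sum(axis=0) / n  # per-coordinate curvature
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with coordinate j removed
            r_j = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r_j / n
            # soft-thresholding: small coefficients shrink exactly to zero,
            # which is what makes the L1 penalty perform variable selection
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_norm[j]
    return beta

# hypothetical sparse problem: 3 active predictors out of 10
rng = np.random.default_rng(42)
X = rng.standard_normal((200, 10))
beta_true = np.array([3.0, -2.0, 0, 0, 1.5, 0, 0, 0, 0, 0])
y = X @ beta_true + 0.5 * rng.standard_normal(200)

beta_hat = lasso_cd(X, y, lam=0.1)
```

The inactive coefficients are driven exactly to zero while the active ones are retained (with a small shrinkage bias of roughly lam); the paper's extension replaces this fixed penalty parameter lambda with a hyperprior, and the quadratic loss with a dependent GLM likelihood.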
