Abstract
The Lee–Carter model has become a benchmark in stochastic mortality modeling. However, its forecasting performance can be significantly improved upon by modern machine learning techniques. We propose a convolutional neural network (NN) architecture for mortality rate forecasting, empirically compare this model as well as other NN models to the Lee–Carter model, and find that lower forecast errors are achievable for many countries in the Human Mortality Database. We provide details on the errors and forecasts of our model to make it more understandable and, thus, more trustworthy. As NNs by default yield only point estimates, previous works applying them to mortality modeling have not investigated prediction uncertainty. We address this gap in the literature by implementing a bootstrapping-based technique and demonstrate that it yields highly reliable prediction intervals for our NN model.
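The abstract does not spell out the bootstrapping procedure, so the following is only a generic residual-bootstrap sketch of how prediction intervals can be obtained for a point forecaster. A simple linear trend on synthetic log death rates stands in for the NN model purely for illustration; the paper's actual resampling scheme and forecaster are not reproduced here.

```python
import numpy as np

# Generic residual-bootstrap sketch for prediction intervals.
# A linear trend stands in for the NN predictor (illustrative assumption).
rng = np.random.default_rng(42)
t = np.arange(30, dtype=float)
y = -5.0 - 0.02 * t + rng.normal(scale=0.05, size=t.size)  # synthetic log rates

def fit_and_forecast(series, horizon=1.0):
    """Fit a linear trend and return the point forecast `horizon` steps ahead."""
    coeffs = np.polyfit(t, series, deg=1)
    return np.polyval(coeffs, t[-1] + horizon)

# Residuals of the base fit are resampled to create bootstrap pseudo-samples.
base_fit = np.polyval(np.polyfit(t, y, deg=1), t)
residuals = y - base_fit

boot_forecasts = []
for _ in range(1000):
    y_star = base_fit + rng.choice(residuals, size=residuals.size, replace=True)
    # Adding one resampled residual to the refitted forecast captures both
    # parameter and observation uncertainty in the interval.
    boot_forecasts.append(fit_and_forecast(y_star) + rng.choice(residuals))

# 95% prediction interval from the bootstrap distribution:
lo, hi = np.percentile(boot_forecasts, [2.5, 97.5])
print(f"95% interval: [{lo:.3f}, {hi:.3f}]")
```

The percentile interval is one common choice; calibrated variants (e.g. basic or studentized bootstrap intervals) follow the same resample-refit-forecast pattern.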
Highlights
Lee and Carter (1992) propose a seminal stochastic mortality model, the Lee–Carter (LC) model, in which they decompose logarithmic death rates into an age-specific base level and a time-varying component multiplied by an age-modulating parameter.
We show in an empirical study that it produces reliable and informative prediction interval estimates for the convolutional neural network (CNN), whereas the intervals obtained from the standard LC approach fail to contain the target values as often as required.
We train the neural network (NN) models on the whole available age range x ∈ X = Xin := {0, . . . , 100} to make use of all data during training, and we usually evaluate them on the ages x ∈ Xout := {60, . . . , 89}, which are most relevant for annuity payments and often considered in actuarial mortality forecasting applications.
Summary
Lee and Carter (1992) propose a seminal stochastic mortality model, the Lee–Carter (LC) model, in which they decompose logarithmic death rates into an age-specific base level and a time-varying component (period effect) multiplied by an age-modulating parameter (age effect). FFNNs have been applied to mortality forecasting by Shah and Guez (2009) and more recently by Richman and Wüthrich (2021), who provide a review of existing stochastic multi-population mortality models and point out some of their drawbacks: sometimes they are difficult to calibrate, and some model structures are hard to justify and lack theoretical foundations. They propose to refrain from making any structural assumptions at all on mortality development and to fully rely on an NN to learn mortality intensities from historical data. Wang et al. (2021) also consider CNNs with two-dimensional convolutions and show that they produce more accurate one-step point forecasts than classical stochastic mortality models. Despite their typically stronger predictive performance, practitioners do not always prefer NNs because they are hard to interpret.
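The LC decomposition described above, log m(x, t) = a_x + b_x · k_t, is classically calibrated via a singular value decomposition of the centred log-rate matrix. The following sketch shows that standard calibration on noise-free synthetic data; the age/year dimensions and parameter values are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Illustrative Lee-Carter calibration via SVD (the classical approach, not the
# paper's NN models). Synthetic example data: log m_{x,t} = a_x + b_x * k_t.
ages, years = 101, 40                     # e.g. ages 0..100, 40 calendar years
a_true = np.linspace(-8.0, -1.0, ages)    # age-specific base level
b_true = np.full(ages, 1.0 / ages)        # age-modulating parameter (sums to 1)
k_true = np.linspace(10.0, -10.0, years)  # declining period effect (sums to 0)
log_m = a_true[:, None] + np.outer(b_true, k_true)

# Step 1: a_x is the row-wise mean of the log rates over time.
a_hat = log_m.mean(axis=1)

# Step 2: the leading singular vectors of the centred matrix give b_x and k_t,
# normalised so that sum(b_x) = 1 and sum(k_t) = 0 (the usual LC
# identifiability constraints).
U, s, Vt = np.linalg.svd(log_m - a_hat[:, None], full_matrices=False)
b_hat = U[:, 0] / U[:, 0].sum()
k_hat = s[0] * Vt[0] * U[:, 0].sum()

# The rank-1 reconstruction recovers the noise-free input exactly.
recon = a_hat[:, None] + np.outer(b_hat, k_hat)
print(np.max(np.abs(recon - log_m)))
```

Forecasting then reduces to projecting the single time series k_t forward, typically with a random walk with drift; the NN models discussed in the paper avoid imposing this rank-1 structure altogether.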