Abstract

Given the rapid reductions in human mortality observed over recent decades and the uncertainty associated with their future evolution, a large number of mortality projection models have been proposed by actuaries and demographers in recent years. However, many of these suffer from being overly complex, thereby producing spurious forecasts, particularly over long horizons and for small, noisy datasets. In this paper, we exploit statistical learning tools, namely group regularisation and cross-validation, to provide a robust framework for constructing such discrete-time mortality models by automatically selecting the most appropriate functions to best describe and forecast particular datasets. Most importantly, this approach produces bespoke models using a trade-off between complexity (to draw as much insight as possible from limited datasets) and parsimony (to prevent overfitting to noise), with this trade-off designed to have specific regard to the forecasting horizon of interest. This is illustrated using both empirical data from the Human Mortality Database and simulated data, using code that has been made available within the user-friendly open-source R package StMoMo.
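As a rough illustration of the modelling workflow the abstract refers to, the following R sketch fits a standard Lee-Carter model with the StMoMo package and produces a central forecast. It does not implement the paper's group-regularisation and cross-validation selection procedure; the bundled EWMaleData dataset, the age range and the 50-year horizon are illustrative assumptions only.

library(StMoMo)

# Standard Lee-Carter predictor under a log (Poisson) link
LC <- lc(link = "log")

# Fit to the England & Wales male data bundled with StMoMo,
# restricted to an illustrative age range
LCfit <- fit(LC, data = EWMaleData, ages.fit = 55:89)

# Central 50-year projection of the period index and the implied rates
LCfor <- forecast(LCfit, h = 50)
plot(LCfor)

In the paper's framework the fixed Lee-Carter structure above would instead be chosen automatically, with the regularisation penalty and cross-validation horizon tuned to the forecasting task at hand.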
