Abstract

The classical autoregressive moving average (ARMA) and generalized autoregressive conditional heteroskedasticity (GARCH) models have been widely adopted to forecast Value-at-Risk (VaR), the most popular risk measure in quantitative risk management. Their parameters are commonly estimated by maximum likelihood estimation (MLE), which may yield biased estimates. This motivated us to optimize the estimation with a genetic algorithm (GA) or to employ artificial intelligence (AI) models to improve the VaR forecast. In this paper, we compared the performance of the GA-based ARMA-GARCH model and four AI models (linear regression, support vector regression, multilayer perceptron, and long short-term memory) in forecasting VaR, and examined which model performs best. Using Bitcoin, crude oil, and stock index returns, we showed that the GA-ARMA-GARCH and AI models outperform the benchmark MLE-ARMA-GARCH model. We also found that the AI models (particularly the multilayer perceptron) integrated with kernel density estimation (KDE) performed better than the GA-ARMA-GARCH model, yielding higher VaR-backtesting p-values together with smaller error values. In addition, cross-combinations of our models showed further potential to improve the VaR forecast.
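
As a minimal illustration of the KDE-based VaR estimation and the backtesting criterion mentioned above (not the authors' implementation), the sketch below estimates a VaR quantile by fitting a Gaussian KDE to residuals and checks its coverage with the Kupiec proportion-of-failures test; the helper names `kde_var` and `kupiec_pof` and the simulated data are our own assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde, chi2

def kde_var(residuals, alpha=0.05):
    # Fit a Gaussian KDE to the residuals and invert its CDF numerically
    # to obtain the alpha-quantile, i.e., the VaR level.
    kde = gaussian_kde(residuals)
    grid = np.linspace(residuals.min() - 1.0, residuals.max() + 1.0, 2048)
    cdf = np.cumsum(kde(grid))
    cdf /= cdf[-1]
    return np.interp(alpha, cdf, grid)

def kupiec_pof(returns, var_forecasts, alpha=0.05):
    # Kupiec proportion-of-failures test: under correct coverage, the
    # violation rate equals alpha and the LR statistic is chi2(1).
    hits = returns < var_forecasts
    n, x = len(returns), int(hits.sum())
    ll_null = x * np.log(alpha) + (n - x) * np.log(1 - alpha)
    if x in (0, n):
        lr = -2.0 * ll_null  # boundary case: alternative log-likelihood is zero
    else:
        pi = x / n
        lr = -2.0 * (ll_null - x * np.log(pi) - (n - x) * np.log(1 - pi))
    return 1.0 - chi2.cdf(lr, df=1)

# Toy usage: heavy-tailed simulated returns, flat one-day 5% VaR forecast.
rng = np.random.default_rng(0)
returns = 0.02 * rng.standard_t(df=5, size=1250)
var_level = kde_var(returns[:1000], alpha=0.05)  # in-sample fit
p_value = kupiec_pof(returns[1000:], np.full(250, var_level), alpha=0.05)
print(f"KDE VaR: {var_level:.4f}, Kupiec p-value: {p_value:.3f}")
```

A higher Kupiec p-value, as reported for the KDE-integrated AI models, indicates that the observed violation rate is statistically consistent with the nominal coverage level.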
