Abstract

We extend recurrent neural networks to include several flexible timescales for each dimension of their output, which mechanically improves their ability to account for processes with long memory or highly disparate timescales. We compare the ability of vanilla and extended long short-term memory networks (LSTMs) to predict the intraday volatility of a collection of equity indices known to have long memory. Generally, the extended LSTMs require about half as many training epochs, while the variation in validation and test losses among models with the same hyperparameters is much smaller. We also show that the single model with the smallest validation loss systematically outperforms rough volatility predictions of the average intraday volatility of equity indices by about 20% when trained and tested on a dataset with multiple time series.
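To make the idea of "several flexible timescales per output dimension" concrete, below is a minimal PyTorch sketch of one plausible such cell: each hidden unit keeps several cell-state channels, each decaying at its own learnable rate, which are then mixed into a single output. The class name, the exp(-1/τ) forget parameterisation, the log-spaced initialisation, and the softmax mixing are all illustrative assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn


class MultiTimescaleLSTMCell(nn.Module):
    """Hypothetical multi-timescale LSTM cell: each hidden unit keeps
    n_scales cell-state channels, each with its own learnable decay rate."""

    def __init__(self, input_size: int, hidden_size: int, n_scales: int = 3):
        super().__init__()
        self.hidden_size = hidden_size
        self.n_scales = n_scales
        # Input, candidate and output gates, as in a standard LSTM.
        self.gates = nn.Linear(input_size + hidden_size, 3 * hidden_size)
        # Learnable log-timescales, initialised on a grid from fast to slow.
        init = torch.linspace(-1.0, 3.0, n_scales).repeat(hidden_size, 1)
        self.log_tau = nn.Parameter(init)               # shape (H, K)
        # Learnable mixing weights across the K timescale channels.
        self.mix = nn.Parameter(torch.zeros(hidden_size, n_scales))

    def forward(self, x, state):
        h, c = state                                    # h: (B, H), c: (B, H, K)
        i, g, o = self.gates(torch.cat([x, h], dim=-1)).chunk(3, dim=-1)
        i, o, g = torch.sigmoid(i), torch.sigmoid(o), torch.tanh(g)
        # Per-unit, per-scale forget factor f = exp(-1/tau), always in (0, 1);
        # a large tau gives f close to 1, i.e. long memory.
        f = torch.exp(-1.0 / torch.exp(self.log_tau))   # (H, K)
        c = f * c + (1.0 - f) * (i * g).unsqueeze(-1)   # update every channel
        w = torch.softmax(self.mix, dim=-1)             # convex mix over scales
        h = o * torch.tanh((w * c).sum(dim=-1))
        return h, (h, c)


# Usage: one step on a batch of 32 inputs with 8 features and 16 hidden units.
cell = MultiTimescaleLSTMCell(input_size=8, hidden_size=16, n_scales=3)
x = torch.randn(32, 8)
state = (torch.zeros(32, 16), torch.zeros(32, 16, 3))
h, state = cell(x, state)
```

Initialising the log-timescales on a grid from fast to slow is one way to let the cell represent the highly disparate timescales the abstract mentions from the very first epoch, rather than having to discover them through the forget gate alone.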
