Abstract

Financial markets have experienced several extreme negative (multi-sigma) events in recent years; these events occur with far more regularity than current risk models predict. In finance, there is no guarantee that the data generating process of the training set will remain the same in the test set. Mathematical models are designed to operate on unlimited, unchanging data, and yet actual events keep confounding most models. The assumptions of independent and identically distributed random variables and of a stationary time series do not hold in reality. Over-reliance on historical data and backtesting of models is not a sufficient approach to overcome these challenges. Reinforcement learning faces similar challenges when applied to financial time series. Out-of-distribution generalization is a problem that cannot be solved without assumptions on the data generating process: if the test data is arbitrary or unrelated to the training data, then generalization is not possible. Identifying such assumptions could help us build more robust AI and financial modeling systems. N-Beats (Oreshkin et al. [2020]) is a deep neural architecture with backward and forward residual links and a deep stack of fully-connected layers; it can be viewed as a meta-learning model for time series prediction. Meta-learning is a machine learning approach that aims to design models that can learn new skills or adapt to new environments rapidly from few training examples. We explore the performance of N-Beats and compare it with other deep learning models. The results are not conclusive in establishing N-Beats as a better model than the others tested in this study; we show that other neural network-based models offer similar performance.
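To make the architecture described above concrete, the following is a minimal sketch of a generic N-Beats block and its double residual stacking, assuming PyTorch. The class names, hidden width, layer count, and block count here are illustrative assumptions, not the configuration used in the paper or in this study; the point is only the mechanism: each block emits a backcast that is subtracted from the running input (backward residual link) and a forecast that is added to the running output (forward residual link).

import torch
import torch.nn as nn

class NBeatsBlock(nn.Module):
    """One generic block: a deep stack of fully-connected layers that
    emits a backcast (reconstruction of the lookback window) and a
    forecast (prediction over the horizon)."""
    def __init__(self, input_size: int, horizon: int,
                 hidden: int = 256, n_layers: int = 4):
        super().__init__()
        layers, in_dim = [], input_size
        for _ in range(n_layers):
            layers += [nn.Linear(in_dim, hidden), nn.ReLU()]
            in_dim = hidden
        self.fc_stack = nn.Sequential(*layers)
        self.backcast_head = nn.Linear(hidden, input_size)
        self.forecast_head = nn.Linear(hidden, horizon)

    def forward(self, x):
        h = self.fc_stack(x)
        return self.backcast_head(h), self.forecast_head(h)

class NBeats(nn.Module):
    """Stack of blocks with double residual links: each block removes
    its backcast from the residual input and contributes its forecast
    to the running output."""
    def __init__(self, input_size: int, horizon: int, n_blocks: int = 3):
        super().__init__()
        self.blocks = nn.ModuleList(
            [NBeatsBlock(input_size, horizon) for _ in range(n_blocks)]
        )
        self.horizon = horizon

    def forward(self, x):
        residual = x
        forecast = x.new_zeros(x.size(0), self.horizon)
        for block in self.blocks:
            backcast, block_forecast = block(residual)
            residual = residual - backcast        # backward residual link
            forecast = forecast + block_forecast  # forward residual link
        return forecast

# Usage with a dummy lookback window (shapes are illustrative).
model = NBeats(input_size=30, horizon=5)
window = torch.randn(8, 30)   # batch of 8 series, 30-step lookback
prediction = model(window)    # shape: (8, 5)

This sketch uses the "generic" variant, in which the backcast and forecast heads are unconstrained linear projections; the original paper also proposes interpretable variants whose heads project onto fixed trend and seasonality bases.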
