Abstract

In the financial literature, there is great interest in the prediction of stock prices. Stock prediction is necessary for the creation of different investment strategies, both speculative and hedging ones. The application of neural networks has brought about a change in the creation of predictive models. In this paper, we analyze the capacity of recurrent neural networks, in particular the long short-term memory (LSTM) network, as opposed to classic time series models such as the exponential smoothing (ETS) model and the autoregressive integrated moving average (ARIMA) model. These models have been estimated for 284 stocks from the S&P 500 stock market index, comparing the MAE obtained from their predictions. The results obtained confirm a significant reduction in prediction errors when LSTM is applied. These results are consistent with other similar studies applied to stocks included in other stock market indices, as well as to other financial assets such as exchange rates.
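The comparison described above scores each model by the mean absolute error (MAE) of its one-step-ahead forecasts. As a minimal self-contained sketch (not the paper's actual estimation code), the snippet below hand-rolls two classical stand-ins on a toy price series: simple exponential smoothing (the basic case of an ETS model) and an AR(1) model fitted by least squares (the simplest ARIMA-family model), then compares their MAE. The toy prices and the smoothing parameter `alpha` are illustrative assumptions.

```python
def ses_forecasts(prices, alpha=0.5):
    """One-step-ahead simple exponential smoothing forecasts."""
    level = prices[0]
    preds = []
    for p in prices[1:]:
        preds.append(level)                      # forecast next observation
        level = alpha * p + (1 - alpha) * level  # update smoothed level
    return preds

def ar1_forecasts(prices):
    """One-step-ahead forecasts from an AR(1) fitted by least squares."""
    x, y = prices[:-1], prices[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    phi = (sum((a - mx) * (b - my) for a, b in zip(x, y))
           / sum((a - mx) ** 2 for a in x))      # slope estimate
    c = my - phi * mx                            # intercept estimate
    return [c + phi * p for p in x]

def mae(actual, pred):
    """Mean absolute error between realized values and forecasts."""
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

if __name__ == "__main__":
    prices = [100, 101, 103, 102, 105, 107, 106, 108, 110, 109]  # toy series
    actual = prices[1:]
    print("SES  MAE:", round(mae(actual, ses_forecasts(prices)), 3))
    print("AR(1) MAE:", round(mae(actual, ar1_forecasts(prices)), 3))
```

In the paper the same MAE comparison is run per stock across 284 S&P 500 series, with an LSTM as the third competitor.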

Highlights

  • The prediction of stock returns has been widely studied in the financial literature

  • It is intended to confirm the efficiency of a long short-term memory (LSTM) network, as opposed to some classic models applied to time series

  • It is going to be compared with an exponential smoothing (ETS) model and an ARIMA model


Summary

Introduction

The prediction of stock returns has been widely studied in the financial literature. One of the main objectives is the construction of stock portfolios. In Qiu and Song (2016), for example, the authors analyze the prediction of the direction of the stock price index for the Nikkei 225 stock market index. In this case, they use a backpropagation neural network with two different types of inputs to determine what kind of information improves results. Other studies confirm these results, like the paper by Rather, Agarwal, and Sastry (2015), who analyse 25 stock returns from the Bombay stock exchange, indicating that this model is capable of capturing non-linear patterns more efficiently than classical models. In this case it is concluded that the RNN learning process improves as it needs to search for smaller weights.
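As a rough illustration of the recurrent architecture these studies build on, one forward step of a single-unit LSTM cell can be sketched in plain Python. This is a pedagogical sketch, not the networks estimated in the cited papers: the weight dictionary `w` and the scalar (1-unit) formulation are simplifying assumptions; real models use weight matrices and many units.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, w):
    """One forward step of a scalar (1-unit) LSTM cell.

    x      : current input value
    h_prev : previous hidden state
    c_prev : previous cell state
    w      : dict of scalar weights/biases for the four gates (assumed layout)
    """
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])    # forget gate
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])    # input gate
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])    # output gate
    g = math.tanh(w["wg"] * x + w["ug"] * h_prev + w["bg"])  # candidate state
    c = f * c_prev + i * g        # new cell state: keep part of old, add new
    h = o * math.tanh(c)          # new hidden state exposed to the next layer
    return h, c
```

The gating structure (forget/input/output) is what lets the cell retain information over long lags, which is the property the comparison against ETS and ARIMA exploits.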

Data and Methods
Exponential smoothing model
Autoregressive moving average model
Recurrent neural network
Results
Conclusions
