Abstract

Recurrent reinforcement learning (RRL) is a machine-learning algorithm that has been proposed for building financial trading systems. When RRL trading performance is analysed on low-frequency financial data (e.g. daily data), the weaker autocorrelation in price changes can reduce trading profits relative to high-frequency applications. There is therefore a need to improve RRL for daily equity trading. This paper presents two parameter update schemes (the `average elitist' and the `multiple elitist') for RRL. The first scheme aims to improve the out-of-sample performance of RRL-type trading systems. The second exploits serial dependence in stock returns to improve trading performance when traders deal with highly correlated stocks. The profitability and stability of the trading system are examined using four groups of S&P stocks over the period January 2009 to December 2012. It is found that the Sharpe ratios of the stocks increase after the two parameter update schemes are applied in the RRL trading system.
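The abstract does not give the trader's functional form, but RRL systems in this literature typically compute a position in [-1, 1] from recent returns and the previous position, and are evaluated by the Sharpe ratio. The sketch below is a minimal illustration under those assumptions (the recurrence F_t = tanh(w·r_t + u·F_{t-1} + b), the AR(1) return process, and all parameter values are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def rrl_positions(returns, w, u, b):
    """Recurrent trading signal F_t = tanh(w * r_t + u * F_{t-1} + b).

    The feedback term u * F_{t-1} (the previous position) is what makes
    the trader "recurrent"; positions lie in [-1, 1] (short to long).
    """
    F = np.zeros(len(returns) + 1)
    for t, r in enumerate(returns):
        F[t + 1] = np.tanh(w * r + u * F[t] + b)
    return F[1:]

def sharpe_ratio(trading_returns):
    """Sample Sharpe ratio: mean per-period return over its std dev."""
    return trading_returns.mean() / trading_returns.std()

# Synthetic daily returns with weak positive autocorrelation, standing in
# for the low-frequency regime the abstract describes.
eps = rng.normal(0.0, 0.01, 1000)
r = np.empty_like(eps)
r[0] = eps[0]
for t in range(1, len(r)):
    r[t] = 0.2 * r[t - 1] + eps[t]

F = rrl_positions(r, w=50.0, u=0.5, b=0.0)
strat = F[:-1] * r[1:]  # hold yesterday's position over today's return
print(f"Sharpe ratio: {sharpe_ratio(strat):.3f}")
```

In a full RRL system the parameters (w, u, b here) would be trained by gradient ascent on the Sharpe ratio itself; the paper's `average elitist' and `multiple elitist' schemes modify how those trained parameters are carried out of sample.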
