Abstract

Portfolio management in financial markets involves risk management strategies and opportunistic responses to individual trading behaviours. An optimal portfolio aims to achieve minimal risk with the highest accompanying investment returns, regardless of market conditions. This paper provides an alternative view on maximising portfolio returns using Reinforcement Learning (RL), by considering dynamic risks appropriate to market conditions through dynamic portfolio rebalancing. The proposed algorithm improves portfolio management by introducing dynamic rebalancing of portfolios with varying risk through an RL agent, while accounting for market conditions, asset diversification, and risk and returns in the global financial market. Four methods are explored in this paper, combining full portfolio rebalancing or gradual portfolio rebalancing with or without a Long Short-Term Memory (LSTM) model that predicts stock prices to adjust the technical indicator centring. The performance of the four methods has been evaluated and compared on three constructed financial portfolios: one with global market index assets at different risk levels, and two with uncorrelated stock assets from different sectors and risk levels. The experimental results show that the proposed RL agent for gradual portfolio rebalancing with the LSTM price prediction model outperforms the other three methods, as well as the returns of the individual assets in the three portfolios. The returns achieved by the RL agent for gradual rebalancing with the prediction model are about 27.9–93.4% higher than those of full rebalancing without a prediction model. The agent has demonstrated the ability to dynamically adjust portfolio composition according to the market trends, risks and returns of the global indices and stock assets.

Highlights

  • In modern portfolio theory, portfolio optimisation aims to maximise the returns of a portfolio while minimising risk through diversification [1, 2]

  • This paper provides an alternative view on maximising portfolio returns using Reinforcement Learning (RL), by considering dynamic risks appropriate to market conditions through dynamic portfolio rebalancing

  • The proposed RL agent aims to improve the returns of the portfolio Net Asset Value (NAV) by exploring four methods that combine full or gradual portfolio rebalancing with or without a Long Short-Term Memory (LSTM) price prediction model



Introduction

Portfolio optimisation aims to maximise the returns of a portfolio while minimising risk through diversification [1, 2]. Jeong and Kim [41] combine RL and a deep neural network (DNN) for prediction by adding a DNN regressor to a deep Q-network, with experiments conducted individually on four stock indices: S&P500, KOSPI, HSI, and EuroStoxx50. For the first time, this enables predictions with a different number of shares for each asset, compared with other methods that trade a fixed number of shares, which increases trading profits. The proposed RL agent aims to improve the returns of the portfolio Net Asset Value (NAV) by exploring four methods that combine full or gradual portfolio rebalancing with or without a Long Short-Term Memory (LSTM) price prediction model. These four approaches are presented and compared using three constructed financial portfolios.
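The paper's exact rebalancing rule is not reproduced here, but the distinction between full and gradual portfolio rebalancing can be illustrated with a minimal sketch: full rebalancing jumps straight to the RL agent's target allocation, while gradual rebalancing moves only part of the way each step. The `step` fraction and the two-asset weights below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def full_rebalance(weights, target):
    """Full rebalancing: replace the current allocation with the target."""
    return np.asarray(target, dtype=float)

def gradual_rebalance(weights, target, step=0.25):
    """Gradual rebalancing: move a fraction `step` of the way from the
    current weights towards the target, then renormalise to sum to 1."""
    weights = np.asarray(weights, dtype=float)
    target = np.asarray(target, dtype=float)
    new_w = weights + step * (target - weights)
    return new_w / new_w.sum()

current = np.array([0.70, 0.30])  # e.g. 70% in a risky asset, 30% in a safe one
target = np.array([0.40, 0.60])   # allocation chosen by the RL agent

print(full_rebalance(current, target))     # -> [0.4 0.6]
print(gradual_rebalance(current, target))  # -> [0.625 0.375]
```

Gradual rebalancing of this kind smooths the transition between allocations, which is one way to limit turnover when the agent's target changes frequently with market conditions.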

Configurations of RL
Observable period
Action
Reward structure
RL agent
Proposed RL for dynamic rebalancing
Experiment set-up
Proposed RL with full portfolio rebalancing method
Market information lag
LSTM price prediction model
Full portfolio rebalancing with LSTM price prediction
Discussions
Findings
Conclusions

