Abstract

Many studies have successfully used reinforcement learning (RL) to train an intelligent agent that learns profitable trading strategies from financial market data. Most RL trading studies, however, simplify the effect of the agent's actions on the market state: the agent is trained to maximize long-term profit on fixed historical data. Such an approach frequently results in out-of-sample trading performance that differs considerably from that observed during training. In this paper, we propose a multi-agent virtual market model (MVMM) composed of multiple generative adversarial networks (GANs) that cooperate to reproduce market price changes. In addition, the trading agent's action can be superimposed on the current state as input to the MVMM, generating an action-dependent next state. In this research, real historical data were replaced with simulated market data generated by the MVMM. The experimental results indicate that the trained RL agent's trading strategy achieved a 12% higher profit and exhibited a low risk of loss in a backtest on 2019 China Shanghai Shenzhen 300 stock index futures.
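To make the action-conditioned state transition concrete, below is a minimal, hypothetical sketch of one generator in such a virtual market: it maps the current market state, the agent's action, and a noise vector to a simulated next state. All names and dimensions (MarketGenerator, state_dim, the one-hot action encoding) are illustrative assumptions, not the paper's actual MVMM architecture.

```python
import torch
import torch.nn as nn

class MarketGenerator(nn.Module):
    """One GAN generator of the virtual market: maps (current state,
    agent action, noise) to a simulated next market state."""
    def __init__(self, state_dim: int, action_dim: int, noise_dim: int):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim + noise_dim, 128),
            nn.ReLU(),
            nn.Linear(128, state_dim),  # next market state (e.g. price features)
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        # Superimpose the agent's action on the current state, and sample noise
        # so repeated rollouts from the same state remain stochastic.
        z = torch.randn(state.size(0), self.noise_dim, device=state.device)
        return self.net(torch.cat([state, action, z], dim=-1))

# Usage: one simulated environment step for the RL agent.
gen = MarketGenerator(state_dim=16, action_dim=3, noise_dim=8)
state = torch.randn(1, 16)                 # current market state
action = torch.tensor([[0.0, 1.0, 0.0]])   # e.g. one-hot {sell, hold, buy}
next_state = gen(state, action)            # action-dependent next state
print(next_state.shape)                    # torch.Size([1, 16])
```

In this sketch the RL agent would be trained on rollouts through such generators instead of replaying fixed historical data, so the simulated market can respond to the agent's own trades.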
