Abstract
Many recent studies have employed reinforcement learning (RL) techniques to build portfolio strategies. However, because financial markets are extremely noisy, past research has found it difficult to train a stable RL agent on historical data. In this work, we first apply a role-aware multi-agent system to model volatile security markets. We present the three major roles used in our system; each agent maximizes its own objective on Taiwan Stock Exchange (TWSE) historical data while observing the trading behavior of, and competing with, the other agents. To build a trading strategy, we construct a student–teacher framework in which the multi-agent system distills market information into a training target, and a student RL model is then trained on this distilled target. The results show that our method develops profitable strategies in a rapidly changing financial market. In addition, our market-distilling technique can produce flexible asset allocation strategies by swapping in different student networks.
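As a rough illustration of the student–teacher idea described above, the sketch below builds a toy "teacher" target from three hypothetical role agents and fits a simple student policy to imitate it. Everything here is an assumption for illustration: the synthetic price series, the three role definitions (momentum, mean-reversion, liquidity), the weighted-vote distillation, and the least-squares student are not taken from the paper.

```python
# Minimal sketch (not the authors' code): three invented role agents are
# combined into a distilled teacher target, and a student model is fit to
# imitate that target instead of the raw, noisy market reward.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic price series standing in for TWSE history (assumption).
prices = np.cumprod(1 + rng.normal(0, 0.01, size=500))
returns = np.diff(prices) / prices[:-1]

def momentum_agent(window):      # buys recent strength
    return np.sign(np.convolve(returns, np.ones(window) / window, "same"))

def mean_revert_agent(window):   # fades recent strength
    return -np.sign(np.convolve(returns, np.ones(window) / window, "same"))

def liquidity_agent():           # stays near flat, adds noise
    return rng.normal(0, 0.1, size=returns.shape)

# "Distilled" teacher target: a weighted vote of the role agents (assumption).
teacher_target = np.clip(
    0.5 * momentum_agent(10) + 0.3 * mean_revert_agent(5) + 0.2 * liquidity_agent(),
    -1, 1,
)

# Student: a linear policy over a short window of past returns, trained by
# least squares to reproduce the teacher's distilled allocation target.
lag = 5
X = np.stack([returns[i - lag:i] for i in range(lag, len(returns))])
y = teacher_target[lag:]
w, *_ = np.linalg.lstsq(X, y, rcond=None)
student_actions = np.clip(X @ w, -1, 1)

print("imitation MSE:", float(np.mean((student_actions - y) ** 2)))
```

Swapping the linear student for a different network is what the abstract refers to as using different student networks to obtain different asset allocation behaviors.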