Abstract

In stock prediction, deep ensemble models adapt better to dynamically changing market environments than single time-series networks. However, existing ensemble models often underutilize real-time market feedback as a supervision signal, and their base models are pre-trained and then frozen, leaving them unable to adapt as the market evolves. To address these issues, we propose a deep-reinforcement-learning-based dynamic ensemble model for stock prediction (DRL-DEM). First, we employ deep reinforcement learning to optimize the combination weights of deep-learning-based time-series base models. Second, because existing deep-reinforcement-learning methods consider only environmental rewards, we improve the reward function by introducing real-time investment returns as an additional feedback signal for the deep-reinforcement-learning algorithm. Finally, an alternating iterative algorithm trains the base predictors and the deep-reinforcement-learning model simultaneously, allowing DRL-DEM to fully exploit the supervised information for globally coordinated optimization. Experiments on the SSE 50 and NASDAQ 100 datasets show that the proposed method achieves mean squared errors (MSE) of 0.011 and 0.005, Sharpe ratios (SR) of 2.20 and 1.53, and cumulative returns (CR) of 1.38 and 1.21. Compared with the best results among recent models, MSE decreased by 21.4% and 28.6%, SR increased by 81.8% and 82.1%, and CR increased by 89.0% and 89.1%, demonstrating higher forecasting accuracy and stronger investment returns.
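
To make the two core ideas concrete, below is a minimal Python sketch of (a) policy-weighted ensembling of base predictors and (b) a reward shaped by real-time investment return. Everything here is an illustrative assumption: the function names, the softmax weighting, the prediction-error penalty, and the balance coefficient `lam` are not specified by the abstract, and the actual DRL-DEM policy network and alternating training procedure are omitted.

```python
import numpy as np

# Hedged sketch of DRL-DEM's two core ideas: policy-weighted ensembling of
# base predictors and a reward augmented by real-time investment return.
# All names and numbers are illustrative assumptions, not the paper's API.

rng = np.random.default_rng(0)

def ensemble_prediction(base_preds, logits):
    """Softmax the policy's logits so ensemble weights are positive and
    sum to one, then combine the base predictions."""
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return float(w @ base_preds)

def shaped_reward(env_reward, position, price_t, price_t1, lam=0.5):
    """Environment reward plus a real-time investment-return term;
    lam balances the two feedback signals (assumed, not from the paper)."""
    inv_return = position * (price_t1 - price_t) / price_t
    return env_reward + lam * inv_return

# Toy rollout with three stand-in base predictors and a random price path.
prices = 100.0 + np.cumsum(rng.normal(0.0, 1.0, size=50))
logits = np.zeros(3)  # in DRL-DEM a policy network would emit these per step
total_reward = 0.0
for t in range(len(prices) - 1):
    base_preds = prices[t] + rng.normal(0.0, 0.5, size=3)  # mock forecasts
    pred = ensemble_prediction(base_preds, logits)
    position = float(np.sign(pred - prices[t]))   # long if forecast is up
    env_reward = -abs(pred - prices[t + 1])       # prediction-error penalty
    total_reward += shaped_reward(env_reward, position,
                                  prices[t], prices[t + 1])
print(f"total shaped reward over rollout: {total_reward:.3f}")
```

The alternating iterative training described in the abstract would, in addition, alternate gradient updates between the base predictors and the reinforcement-learning policy; that loop is left out here because the abstract does not specify its details.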
