Abstract

Deep reinforcement learning (DRL) has made remarkable strides in enabling computational models to tackle intricate decision-making tasks. In quantitative trading, DRL agents have emerged as a means of optimizing decisions across diverse market scenarios, learning profitable trading strategies from past experience. This study introduces a trading system based on the Deep Q-Network (DQN) algorithm, called Extended Trading DQN (ETDQN). ETDQN stands out for its ability to adapt its learning process and trade effectively across varying market conditions while receiving feedback exclusively upon trade liquidation, in contrast to models that supply agents with continuous feedback signals. ETDQN leverages distributional learning and several other independent DQN extensions to streamline its decision-making. By prioritizing experiences that encompass diverse sub-objectives, the model accumulates maximum profit without requiring intricate reward fine-tuning. Trained on three distinct financial time series, ETDQN demonstrates its ability to identify trading opportunities, particularly during periods of heightened price volatility. Notably, the model manages annual return volatility more assertively than the conventional DQN model and outperforms it by factors of 1.46 and 7.13 in average daily cumulative returns on historical data for Western Digital Corporation and the Cosmos cryptocurrency, respectively.
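
The sparse-feedback setup described above, where the agent is rewarded exclusively upon trade liquidation rather than at every step, can be illustrated with a minimal sketch. This is not the authors' implementation; the environment class, the HOLD/BUY/SELL action space, and the log-return observation are illustrative assumptions only.

```python
import numpy as np

HOLD, BUY, SELL = 0, 1, 2  # hypothetical discrete action space

class TradingEnv:
    """Toy single-asset environment: reward is granted only at liquidation."""

    def __init__(self, prices, window=10):
        self.prices = np.asarray(prices, dtype=float)
        self.window = window
        self.reset()

    def reset(self):
        self.t = self.window      # current time index
        self.entry_price = None   # None means no open position
        return self._state()

    def _state(self):
        # Observation: recent log returns plus a flag for an open position.
        recent = self.prices[self.t - self.window:self.t + 1]
        log_returns = np.diff(np.log(recent))
        position_flag = 0.0 if self.entry_price is None else 1.0
        return np.append(log_returns, position_flag)

    def step(self, action):
        reward = 0.0  # no feedback while a trade is open or absent
        if action == BUY and self.entry_price is None:
            self.entry_price = self.prices[self.t]
        elif action == SELL and self.entry_price is not None:
            # Feedback arrives only upon liquidation: the realized
            # return of the closed trade.
            reward = (self.prices[self.t] - self.entry_price) / self.entry_price
            self.entry_price = None

        self.t += 1
        done = self.t >= len(self.prices) - 1
        return self._state(), reward, done

# Usage example: a random policy on synthetic prices.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500)))
    env = TradingEnv(prices)
    state, done, total = env.reset(), False, 0.0
    while not done:
        state, reward, done = env.step(rng.integers(0, 3))
        total += reward
    print(f"cumulative realized return: {total:.4f}")
```

Because most transitions carry zero reward under this scheme, extensions such as prioritized experience replay and distributional learning, which the paper combines in ETDQN, help the agent learn from the comparatively rare liquidation events.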
