Abstract

Designing a profitable trading strategy plays a critical role in algorithmic trading, where the algorithm manages and executes automated trading decisions. Determining which trading rule to apply at a particular time is a key research problem in financial market trading. An intelligent, dynamic algorithmic trading strategy driven by the current patterns of a given price time-series can help address this issue. Reinforcement Learning (RL) can achieve such dynamic algorithmic trading by treating the price time-series as its environment. A comprehensive representation of the environment states is vital for building a dynamic algorithmic trading strategy with RL. We therefore propose a representation of the environment states based on the Directional Change (DC) event approach with a dynamic DC threshold, and we refer to the resulting approach as the DCRL trading strategy. The DCRL trading strategy is trained with the Q-learning algorithm to find an optimal trading rule. We evaluated the DCRL trading strategy on real stock market data (S&P500, NASDAQ, and Dow Jones over the five-year period 2015-2020), and the results show that the DCRL state representation policies achieved higher trading returns and improved Sharpe Ratios in a volatile stock market. In addition, a series of performance analyses demonstrates the robustness and broad applicability of the proposed DCRL trading strategy.
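
The DC event approach underlying this state representation summarizes a price series as alternating upturn and downturn events, each confirmed when the price reverses from its last extreme by at least a threshold. The sketch below illustrates that detection step with a fixed threshold; the function name and the fixed-threshold simplification are illustrative assumptions, since the DCRL strategy's key feature is a dynamic threshold rather than a fixed one.

```python
# Illustrative sketch of Directional Change (DC) event detection with a
# fixed threshold. The DCRL strategy uses a dynamic threshold instead,
# so this is a simplified assumption, not the authors' implementation.

def detect_dc_events(prices, threshold=0.02):
    """Label each price with the current DC trend: +1 upturn, -1 downturn."""
    events = []
    extreme = prices[0]   # last local extreme (high in a downtrend, low in an uptrend)
    trend = -1            # assume the series starts in a downtrend
    for p in prices:
        if trend == -1:
            if p >= extreme * (1 + threshold):   # upturn DC event confirmed
                trend = 1
                extreme = p
            else:
                extreme = min(extreme, p)
        else:
            if p <= extreme * (1 - threshold):   # downturn DC event confirmed
                trend = -1
                extreme = p
            else:
                extreme = max(extreme, p)
        events.append(trend)
    return events

# Example on a small synthetic price path
print(detect_dc_events([100, 99, 101.5, 103, 100.5, 98], threshold=0.02))
```

In the DCRL strategy the same reversal logic applies, but the threshold value is adjusted dynamically rather than held fixed.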

Highlights

  • Developing algorithmic trading strategies that can make timely stock trading decisions has always been a subject of interest for investors and financial analysts

  • A series of experiments was conducted with the proposed DCRL algorithmic trading strategies, covering the datasets used, performance evaluation metrics, benchmarks, experimental settings, and trading performance results

  • We evaluated three aspects of the proposed DCRL and QDCRL algorithmic trading strategies: trading profitability, trading effectiveness, and the adaptability and efficiency of the dynamic-threshold Directional Change (DC) event approach for the Reinforcement Learning (RL) environment state representation


Summary

INTRODUCTION

Developing algorithmic trading strategies that can make timely stock trading decisions has always been a subject of interest for investors and financial analysts. The DCRL model learns the states of the price time-series to find the optimal dynamic threshold for DC event analysis, and it uses the RL decision-making algorithm to take the most appropriate trading action. We contribute to the financial market literature by designing and developing an algorithmic trading strategy suitable for stock markets, improving the RL environment state representation and action decision-making to ensure stable trading returns even for volatile price time-series. The proposed algorithmic trading strategy performs sequential DC event recognition on the price time-series using the dynamic DC threshold. This model can support decision-makers in determining optimal trading opportunities to maximize profits.
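
To make the decision-making step concrete, the sketch below pairs a discretized DC-based state with a tabular Q-learning update and an epsilon-greedy choice among buy, hold, and sell actions. The state encoding, reward definition, and hyperparameter values are simplified assumptions for illustration, not the exact DCRL formulation.

```python
import random
from collections import defaultdict

ACTIONS = ["buy", "hold", "sell"]   # trading actions available to the agent
Q = defaultdict(float)              # Q-table: (state, action) -> estimated value

def choose_action(state, epsilon=0.1):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state, alpha=0.1, gamma=0.95):
    """Tabular Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# Toy episode: each state pairs the current DC trend label (+1 upturn, -1 downturn,
# as produced by DC event detection) with the current position. The reward is the
# position-weighted one-step price change, an assumed and simplified reward choice.
prices = [100, 99, 101.5, 103, 100.5, 98]
trends = [-1, -1, 1, 1, -1, -1]
position = 0                         # -1 short, 0 flat, +1 long
for t in range(len(prices) - 1):
    state = (trends[t], position)
    action = choose_action(state)
    position = {"buy": 1, "hold": position, "sell": -1}[action]
    reward = position * (prices[t + 1] - prices[t])
    q_update(state, action, reward, (trends[t + 1], position))
```

Running this loop over many episodes gradually shapes the Q-table so that the greedy action in each DC state approximates an optimal trading rule, which is the role Q-learning plays in the DCRL strategy.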

