Abstract

Reinforcement learning has been applied to trading various kinds of financial assets, such as stocks, futures, and cryptocurrencies. Options, as a distinctive kind of derivative, have their own characteristics: a single underlying asset has a large number of option contracts, each with its own price behavior, and the validity period of an option contract is relatively short. To apply reinforcement learning to options trading, we propose the options trading reinforcement learning (OTRL) framework. We train the reinforcement learning model on the options' underlying asset data, using candle data at different time intervals. A protective closing strategy is added to the model to prevent unbearable losses. Our experiments demonstrate that the most stable algorithm for obtaining high returns is proximal policy optimization (PPO) with the protective closing strategy. The deep Q network (DQN) can exceed the buy-and-hold strategy in options trading, as can soft actor critic (SAC). The OTRL framework is thus verified to be effective.
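A protective closing strategy of the kind named above can be pictured as a stop-loss override on the agent's chosen action. The sketch below is a minimal illustration under assumptions of our own, not the paper's implementation: the CLOSE action id, the protect function, and the 10% loss threshold are all hypothetical.

# Hedged sketch of a protective closing rule; names and threshold are
# illustrative assumptions, not the paper's API.
CLOSE = 0           # hypothetical action id: close the open position
MAX_LOSS = 0.10     # assumed threshold: force a close at a 10% loss

def protect(action: int, entry_price: float, current_price: float) -> int:
    """Override the agent's action with a forced close once the open
    position has lost more than MAX_LOSS of its entry price."""
    loss = (entry_price - current_price) / entry_price
    return CLOSE if loss >= MAX_LOSS else action

# Example: the policy wants to hold (action 1) but the option is down 12%,
# so the protective rule closes the position instead.
assert protect(1, entry_price=10.0, current_price=8.8) == CLOSE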

Highlights

  • In recent years, reinforcement learning (RL) has emerged as an effective way to trade financial assets, such as stocks, futures, and cryptocurrencies

  • Reinforcement learning is an area of machine learning concerned with how intelligent agents should take actions in an environment to maximize the notion of cumulative reward [17] (a minimal interaction loop is sketched after this list)

  • PPO is motivated by the same question as trust region policy optimization (TRPO) [32]; its clipped objective is reproduced after this list
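
To make the agent-environment picture in the second highlight concrete, here is a minimal interaction loop over a toy price process. Everything in it (the step function, the two-action encoding, the Gaussian price moves) is an illustrative assumption, not the paper's trading environment.

import random

# Toy environment: the state is a price level and the reward is the price
# move captured while long. Purely illustrative; not the OTRL environment.
def step(price: float, action: int):
    """Advance the toy market one bar and pay out the P&L of the action."""
    move = random.gauss(0.0, 1.0)
    reward = move if action == 1 else 0.0   # 1 = long, 0 = stay flat
    return price + move, reward

price, total_reward = 100.0, 0.0
for t in range(1000):
    action = random.choice([0, 1])          # a trained policy would choose
    price, reward = step(price, action)     # actions to maximize the
    total_reward += reward                  # cumulative reward instead
print(f"cumulative reward: {total_reward:.2f}")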
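
The third highlight relates PPO to TRPO; for reference, the clipped surrogate objective that distinguishes PPO (as given in the original PPO paper) is, in LaTeX notation:

L^{\mathrm{CLIP}}(\theta) = \hat{\mathbb{E}}_t\left[ \min\left( r_t(\theta)\,\hat{A}_t,\; \mathrm{clip}\big(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\big)\,\hat{A}_t \right) \right],
\qquad r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)},

where \hat{A}_t is an advantage estimate and \epsilon is the clipping parameter. TRPO addresses the same step-size question by enforcing a KL-divergence trust region as a hard constraint; PPO's clipping achieves a similar effect with a simpler unconstrained objective.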

Summary

Introduction

Reinforcement learning (RL) has emerged as an effective way to trade financial assets, such as stocks, futures, and cryptocurrencies. Reference [9] uses time-series momentum and technical indicators to train DQN, policy gradient (PG), and advantage actor-critic (A2C) models. They tested the methods on 50 liquid futures contracts and found that the RL algorithms deliver profits even under high transaction costs. Our study has research significance and practical value for options algorithmic trading, and it offers reference value for trading other financial assets and for training-data augmentation problems. The experimental results show that the proposed model, with the protective closing strategy, can obtain decent returns compared to the buy-and-hold strategy in options trading, which indicates that our model learns ways to profit from underlying asset data and applies them to options trading.
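
As a rough illustration of the kind of inputs this line of work feeds to an agent, the snippet below derives a time-series momentum feature and one simple technical indicator from daily closes. The window length and indicator choice are assumptions for illustration, not the feature set of [9] or of our model.

import numpy as np

def features(close: np.ndarray, window: int = 20) -> np.ndarray:
    """State vector for the latest bar: [momentum, moving-average gap]."""
    momentum = close[-1] / close[-window] - 1.0          # window-bar return
    ma_gap = close[-1] / close[-window:].mean() - 1.0    # distance from MA
    return np.array([momentum, ma_gap])

# Simulated daily closes (geometric random walk), then the feature vector
# that a DQN/PG/A2C agent would receive as (part of) its state.
closes = 100.0 * np.cumprod(1.0 + np.random.normal(0.0, 0.01, 252))
print(features(closes))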

Reinforcement Learning and Algorithmic Trading
Data Augmentation Methods for Time Series
Basic Characteristics of Options
Methodology
Training Data for Options Trading
Proposed OTRL Framework
Q-Learning and DQN
Proximal Policy Optimization
Soft Actor-Critic
Protective Closing Strategy
Experiment Result
Training Result
Validation Result
Trading Performance with Protective Closing Strategies
Trading Performance on Options
Conclusions
