Abstract
This article sets forth a framework for deep reinforcement learning as applied to trading cryptocurrencies. Specifically, the authors adopt Q-learning, a model-free reinforcement learning algorithm, and implement a deep neural network to approximate the best actions to take in the cryptocurrency market. Bitcoin, Ethereum, and Litecoin were selected as representative assets on which to test the model. The Deep Q trading agent generated an average portfolio return of 65.98%, although it showed extreme volatility over the 2,000 runs. Despite this high volatility, the experiment demonstrates that deep reinforcement learning has exceptionally high potential for this application and provides a solid foundation on which to build further research.

TOPICS: Currency, big data/machine learning, performance measurement

Key Findings
▪ The authors use deep neural networks to create a Deep Q-learning trading agent that approximates the best actions to take, based on rewards, to maximize returns from trading the three cryptocurrencies with the largest market capitalization.
▪ The Deep Q-learning agent generates a return of 65.98% on average over the course of 2,000 episodes; however, the returns exhibit a large standard deviation given the highly volatile nature of the cryptocurrencies.
▪ The authors introduce a framework on which future deep reinforcement learning and rewards-based trading agents can be built and improved.
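The abstract does not specify the paper's network architecture, state representation, or reward design, so the following is only an illustrative sketch of the underlying Q-learning update on a synthetic price series. It uses a tabular Q-table rather than a deep network, and the two-state market encoding, the three-action space (hold/buy/sell), and the return-based reward are all assumptions made for the example.

```python
import numpy as np

# Illustrative Q-learning trading sketch. NOTE: the environment, reward,
# and state design here are assumptions for demonstration only; the paper
# itself uses a deep neural network as the Q-function approximator.
rng = np.random.default_rng(0)

# Synthetic price series with a mild upward drift (hypothetical data).
prices = 100 * np.cumprod(1 + rng.normal(0.001, 0.01, 500))

ACTIONS = ("hold", "buy", "sell")          # indices 0, 1, 2
N_STATES = 2                                # state: last move down (0) or up (1)
Q = np.zeros((N_STATES, len(ACTIONS)))      # tabular stand-in for the deep net

alpha, gamma, eps = 0.1, 0.95, 0.1          # learning rate, discount, exploration

def step_reward(action, ret):
    """Reward = next-step return if long, its negative if short, 0 if flat."""
    return {0: 0.0, 1: ret, 2: -ret}[action]

for episode in range(50):
    for t in range(1, len(prices) - 1):
        s = int(prices[t] > prices[t - 1])
        # Epsilon-greedy action selection.
        a = int(rng.integers(3)) if rng.random() < eps else int(np.argmax(Q[s]))
        ret = prices[t + 1] / prices[t] - 1
        r = step_reward(a, ret)
        s_next = int(prices[t + 1] > prices[t])
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

print(Q)  # learned action values per state
```

In the paper's setting, the Q-table above would be replaced by a deep neural network mapping market observations to action values, which is what allows the approach to scale beyond a handful of discrete states.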