Abstract
Recent research on deep reinforcement learning (RL) has demonstrated its capability in automating sequential decision-making and control tasks. However, RL agents require a large number of interactions with the environment and transfer poorly across tasks. These problems severely restrict the adoption of deep RL in real-world settings. Inspired by the human learning process, a promising way to address these problems is to integrate transferable domain knowledge into deep RL. In this work, we propose a method called Deep Q-learning with transferable Domain Rules (DQDR) that incorporates transferable domain knowledge to improve the sample efficiency and transferability of RL algorithms. We extract domain knowledge from humans, encode it as a set of rules, and couple this knowledge with the deep Q-network (DQN). In our experiments, we compare DQDR with other knowledge-based methods and apply it to a series of CartPole and FlappyBird tasks with different system dynamics. The empirical results show that our approach accelerates the learning process and improves the transfer capability of RL algorithms.
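The abstract describes coupling human-derived rules with a DQN. As a minimal, hedged sketch of what such a coupling could look like (the rule and function names below are illustrative assumptions, not the paper's actual implementation), one option is to let a firing domain rule guide exploration while Q-values still drive exploitation:

```python
import random

def cartpole_rule(state):
    """Toy domain rule for CartPole: push in the direction the pole leans.

    This rule is a hypothetical example; the paper's actual rule set
    may differ. Returns an action index, or None if the rule abstains.
    """
    pole_angle = state[2]
    if abs(pole_angle) > 0.05:       # rule fires only for a clear tilt
        return 1 if pole_angle > 0 else 0
    return None                       # rule abstains near upright

def select_action(state, q_values, rules, epsilon=0.1, rng=random):
    """Rule-guided epsilon-greedy selection (illustrative sketch).

    During exploration, prefer the suggestion of the first firing
    domain rule; if no rule fires, explore uniformly at random.
    Otherwise act greedily with respect to the learned Q-values.
    """
    if rng.random() < epsilon:
        for rule in rules:
            suggestion = rule(state)
            if suggestion is not None:
                return suggestion             # knowledge-guided exploration
        return rng.randrange(len(q_values))   # no rule fired: random explore
    return max(range(len(q_values)), key=q_values.__getitem__)  # greedy
```

Because the rules reference task-level quantities (such as pole angle) rather than learned weights, the same rule set can be reused when the system dynamics change, which is one plausible route to the transferability the abstract claims.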