Abstract

Deep reinforcement learning (DRL) has recently attained remarkable results in various domains, including games, robotics, and recommender systems. Nevertheless, fast adaptation remains a pressing problem in the practical application of DRL. To this end, this article proposes a new and versatile metalearning approach called fast task adaptation via metalearning (FTAML), which combines the strengths of model-based methods and gradient-based metalearning methods to train the model's initial parameters, such that the model can efficiently master unseen tasks from only a small amount of task data. The proposed algorithm separates task identification from task optimization: the model-based learner identifies the pattern of a task, while the gradient-based metalearner consistently improves performance with only a few gradient-update steps by exploiting the task embedding produced by the model-based learner. In addition, the choice of network for the model-based learner is discussed, and the performance of networks of different depths is explored. Finally, simulation results on reinforcement learning problems demonstrate that the proposed approach outperforms the compared metalearning algorithms and delivers new state-of-the-art performance on a variety of challenging control tasks.
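The two-stage idea in the abstract (a model-based learner that embeds the task, and a gradient-based learner that adapts from a meta-learned initialization in a few steps) can be illustrated with a minimal sketch. This is not the FTAML architecture, whose details are not given here; the embedding function, the linear predictor, and the stand-in initialization below are all hypothetical choices made purely for illustration, on a toy regression task rather than a control task.

```python
import numpy as np

def task_embedding(x, y):
    """Hypothetical stand-in for the model-based learner: summarize the
    task's support data into a fixed-size embedding (simple moments)."""
    return np.array([x.mean(), x.std(), y.mean(), y.std()])

def predict(w, b, v, x, emb):
    """Toy predictor conditioned on the task embedding."""
    return w * x + b + emb @ v

def adapt(w, b, v, x, y, emb, lr=0.05, steps=5):
    """Gradient-based adaptation: a few SGD steps on the task's MSE loss,
    starting from the (meta-learned) initial parameters."""
    for _ in range(steps):
        err = predict(w, b, v, x, emb) - y   # residuals on support data
        w = w - lr * 2 * (err * x).mean()    # dL/dw
        b = b - lr * 2 * err.mean()          # dL/db
        v = v - lr * 2 * err.mean() * emb    # dL/dv
    return w, b, v

rng = np.random.default_rng(0)
# A toy unseen task: noisy linear data with unknown slope and offset.
x = rng.uniform(-1, 1, 20)
y = 3.0 * x + 1.0 + 0.01 * rng.normal(size=20)

emb = task_embedding(x, y)            # identify the task
w0, b0, v0 = 0.0, 0.0, np.zeros(4)    # stand-in for the meta-learned init
loss_before = ((predict(w0, b0, v0, x, emb) - y) ** 2).mean()
w, b, v = adapt(w0, b0, v0, x, y, emb)
loss_after = ((predict(w, b, v, x, emb) - y) ** 2).mean()
print(loss_before, loss_after)
```

The point of the separation is visible even in this sketch: the embedding is computed once from the support data and held fixed, while only the predictor's parameters are updated by the few gradient steps, so adaptation stays cheap.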
