Abstract

Traditional reinforcement learning methods are designed for single-scenario tasks; agents trained this way perform poorly when deployed across multiple scenarios. That is, traditional reinforcement learning methods generalize poorly when facing different tasks simultaneously. In this work, we propose a practical deep reinforcement learning framework that learns multiple 3D scenarios concurrently. We adopt the Actor–Learner framework to parallelize training across scenarios, and we resolve the policy-lag problem by generalizing Retrace(λ) to a new value function, whose convergence we prove theoretically. In addition, inspired by the hard sharing of representations in multi-task learning, we design an auxiliary recognition task and an auxiliary control task to improve the performance of our multi-scenario agent. Experimental results show that our method outperforms state-of-the-art algorithms on DMLab-30, with especially large gains on multi-scenario games. Ablation experiments verify the effectiveness of each part of our framework, and tests on untrained scenarios show that our parallel learner transfers to them.
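For reference only (the abstract does not state the authors' generalized value function, so this is a sketch of the baseline, not their method): the standard Retrace(λ) target of Munos et al. (2016), which the abstract says is generalized here, corrects trajectories collected under a lagged behaviour policy μ toward the learner's current policy π using truncated importance weights. With states x, actions a, rewards r, and discount γ,

    \Delta Q(x_t, a_t) = \sum_{s \ge t} \gamma^{s-t} \Big( \prod_{i=t+1}^{s} c_i \Big) \Big( r_s + \gamma \, \mathbb{E}_{a \sim \pi} Q(x_{s+1}, a) - Q(x_s, a_s) \Big),
    \qquad c_i = \lambda \min\!\Big( 1, \frac{\pi(a_i \mid x_i)}{\mu(a_i \mid x_i)} \Big).

Truncating the importance ratios at 1 bounds the variance of the off-policy correction while the operator still contracts to Q^{\pi}, which is why stale actor data in an Actor–Learner setup remains safe to learn from.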
