Abstract
In this article, a novel coupled policy improvement mechanism is developed for improving policy iteration (PI) algorithms. In contrast to common PI, the developed dual parallel policy iteration (DPPI) with a coupled policy improvement mechanism consists of two parallel PIs. At each PI step, the performances of the two parallel policies are evaluated and the better one is defined as the dominant policy. The dominant policy then guides the parallel policy improvement in a soft manner by constraining the Kullback-Leibler (KL) divergence between the dominant policy and the policy to be updated. It is proven that the convergence of DPPI is guaranteed under the designed coupled policy improvement mechanism. Moreover, it is shown that under certain conditions, the Q-functions of the two new policies obtained in each parallel policy improvement are larger than those of all the previous dominant policies, which is conducive to accelerating the PI process and improving policy learning efficiency. Furthermore, by combining DPPI with the twin delayed deep deterministic (TD3) policy gradient algorithm, we propose a reinforcement learning (RL) algorithm: parallel TD3 (PTD3). Experimental results on continuous-action control tasks from the MuJoCo and OpenAI Gym platforms show that the proposed PTD3 outperforms state-of-the-art RL algorithms.
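The coupled policy-improvement idea described in the abstract can be illustrated in a small tabular setting: evaluate two policies in parallel, select the better-performing one as the dominant policy, and update each policy by maximizing expected return with a KL pull toward the dominant policy. The sketch below is a minimal illustration under assumed simplifications (a random toy MDP, exact policy evaluation, and a closed-form softmax solution to the KL-regularized improvement step with temperature `tau`); it is not the paper's implementation of DPPI or PTD3.

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma = 4, 2, 0.9

# Toy MDP (illustrative assumption): random transitions and rewards.
P = rng.dirichlet(np.ones(nS), size=(nS, nA))  # P[s, a] -> next-state dist
R = rng.uniform(0.0, 1.0, size=(nS, nA))       # R[s, a] -> expected reward

def evaluate(pi):
    """Exact policy evaluation: solve (I - gamma * P_pi) V = R_pi."""
    P_pi = np.einsum('sa,san->sn', pi, P)
    R_pi = np.einsum('sa,sa->s', pi, R)
    return np.linalg.solve(np.eye(nS) - gamma * P_pi, R_pi)

def q_values(V):
    """One-step lookahead Q(s, a) = R(s, a) + gamma * E[V(s')]."""
    return R + gamma * P @ V

def kl_soft_improve(Q, pi_dom, tau=0.5):
    """KL-constrained soft improvement: argmax_pi pi.Q - tau*KL(pi || pi_dom),
    whose closed form is pi proportional to pi_dom * exp(Q / tau)."""
    logits = np.log(pi_dom + 1e-12) + Q / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    pi = np.exp(logits)
    return pi / pi.sum(axis=1, keepdims=True)

# Two parallel policies, initialized at random.
pi1 = rng.dirichlet(np.ones(nA), size=nS)
pi2 = rng.dirichlet(np.ones(nA), size=nS)

for _ in range(20):
    V1, V2 = evaluate(pi1), evaluate(pi2)
    # Dominant policy: the better-evaluated of the two parallel policies.
    pi_dom = pi1 if V1.sum() >= V2.sum() else pi2
    # Each policy improves softly toward its own Q, anchored to pi_dom.
    pi1 = kl_soft_improve(q_values(V1), pi_dom)
    pi2 = kl_soft_improve(q_values(V2), pi_dom)
```

The temperature `tau` plays the role of the KL constraint's tightness: small `tau` trusts the current Q-estimates (near-greedy updates), while large `tau` keeps both policies close to the dominant one.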
Published in: IEEE Transactions on Neural Networks and Learning Systems