Abstract

Encouraging the agent to explore has become a prominent topic in reinforcement learning (RL). Popular approaches to exploration either inject noise into neural network (NN) parameters or augment the reward with an additional intrinsic-motivation term. However, the randomness of the injected noise and the manually chosen metric for the intrinsic reward may cause RL agents to deviate from the optimal policy during learning. To enhance the agent's exploration ability while ensuring stable parameter learning, we propose a novel proximal parameter distribution optimization (PPDO) algorithm. On the one hand, PPDO enhances the exploration ability of the RL agent by replacing each point-valued NN parameter with a parameter distribution. On the other hand, PPDO accelerates parameter-distribution optimization by maintaining two groups of parameters; optimization is guided by evaluating the change in parameter quality before and after each distribution update. In addition, PPDO reduces the influence of bias and variance on the value-function approximation by limiting the amplitude of two consecutive parameter updates, which enhances the stability of the distribution optimization. Experiments on the OpenAI Gym, Atari, and MuJoCo platforms indicate that PPDO improves the exploration ability and learning efficiency of deep RL algorithms, including DQN and A3C.
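The two core ideas of the abstract — treating each NN parameter as a distribution rather than a point value, and limiting the amplitude of consecutive updates — can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy (Gaussian weight distributions, a simple clipped update step), not the paper's actual PPDO implementation; the class and function names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

class DistributionalLayer:
    """Linear layer whose weights are Gaussian distributions (mu, sigma)
    instead of point values -- an illustrative sketch, not the paper's code."""
    def __init__(self, n_in, n_out, sigma_init=0.1):
        self.mu = rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_in, n_out))
        self.sigma = np.full((n_in, n_out), sigma_init)

    def sample_weights(self):
        # Exploration arises from sampling a fresh weight realization per call.
        return self.mu + self.sigma * rng.standard_normal(self.mu.shape)

    def forward(self, x):
        return x @ self.sample_weights()

def proximal_update(mu, grad, lr=0.01, clip=0.05):
    """Limit the amplitude of a parameter update (hypothetical stand-in for
    PPDO's proximal constraint on consecutive updates)."""
    step = np.clip(-lr * grad, -clip, clip)
    return mu + step

layer = DistributionalLayer(4, 2)
x = rng.standard_normal((3, 4))
# Two forward passes on the same input differ, because the weights are
# resampled from the distribution each time -- the source of exploration.
y1, y2 = layer.forward(x), layer.forward(x)
print(np.allclose(y1, y2))
```

In this sketch, exploration no longer depends on externally injected noise: it emerges from the sampled weight distribution itself, while the clipped update bounds how far the distribution parameters can move between consecutive steps.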
