Abstract

The proximal policy optimization (PPO) algorithm is a deep reinforcement learning method with outstanding performance, especially on continuous control tasks, but its performance is still limited by its exploration ability. Focusing on continuous control tasks, this paper analyzes the original Gaussian action exploration mechanism in the PPO algorithm and clarifies the influence of exploration ability on performance. To address this exploration problem, we design an exploration enhancement mechanism based on uncertainty estimation, apply it to the PPO algorithm, and propose the proximal policy optimization algorithm with an intrinsic exploration module (IEM-PPO). In the experimental part, we evaluate our method on multiple tasks in the MuJoCo physics simulator and compare IEM-PPO with PPO and with PPO augmented by an intrinsic curiosity module (ICM-PPO). The experimental results demonstrate that IEM-PPO achieves better sample efficiency and higher cumulative reward, and exhibits stability and robustness.
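The abstract does not specify how the uncertainty estimate is computed, so the following is only a minimal illustrative sketch of the general idea of an uncertainty-driven intrinsic reward: it assumes the uncertainty is approximated by the disagreement of an ensemble of forward dynamics models, and it uses hypothetical names (ForwardModel, intrinsic_bonus, beta) that are not taken from the paper. The total reward used for policy optimization is then the environment reward plus a scaled intrinsic bonus.

```python
import numpy as np

rng = np.random.default_rng(0)


class ForwardModel:
    """One member of an ensemble that predicts the next state from (state, action)."""

    def __init__(self, obs_dim, act_dim, hidden=32):
        in_dim = obs_dim + act_dim
        self.W1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.W2 = rng.normal(0.0, 0.1, (hidden, obs_dim))

    def predict(self, obs, act):
        x = np.concatenate([obs, act], axis=-1)
        return np.tanh(x @ self.W1) @ self.W2


def intrinsic_bonus(models, obs, act):
    """Uncertainty proxy: variance of next-state predictions across the ensemble."""
    preds = np.stack([m.predict(obs, act) for m in models])  # shape (K, obs_dim)
    return preds.var(axis=0).mean()


# Hypothetical usage: augment the extrinsic reward with the uncertainty bonus.
obs_dim, act_dim, beta = 4, 2, 0.1
ensemble = [ForwardModel(obs_dim, act_dim) for _ in range(5)]
obs = rng.normal(size=obs_dim)
act = rng.normal(size=act_dim)
r_ext = 1.0
r_total = r_ext + beta * intrinsic_bonus(ensemble, obs, act)
print(r_total)
```

In such a scheme, states and actions where the ensemble members disagree receive a larger bonus, which encourages the PPO policy to visit less-explored regions; the coefficient beta trades off exploration against the extrinsic objective.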
