Abstract

To improve the efficiency of deep reinforcement learning (DRL)-based path planning for robotic manipulators in unstructured environments with obstacles, we propose a Guided Deep Reinforcement Learning (GDRL) method. First, we introduce guided path planning to accelerate the approaching process. Second, we design a new dense reward function for DRL-based path planning. Third, to further improve learning efficiency, the DRL agent is trained only for collision avoidance rather than for the whole path planning process. Together, these three ideas eliminate many wasted explorations during RL training. To evaluate the proposal, a 7-joint Franka Emika robot is simulated in V-REP. The simulation results demonstrate the effectiveness of the proposed GDRL method: compared to a pure DRL method, GDRL requires far fewer training episodes and converges 4× faster.
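The dense reward idea mentioned above can be illustrated with a minimal sketch. All names, weights, and thresholds here are illustrative assumptions, not the paper's actual formulation: the agent is rewarded in proportion to the progress the end effector makes toward the goal each step, penalized on collision, and given a bonus on arrival.

```python
def dense_reward(prev_dist, curr_dist, collided, goal_threshold=0.02,
                 progress_weight=10.0, collision_penalty=-50.0, goal_bonus=100.0):
    """Illustrative dense reward for manipulator path planning (assumed form,
    not the paper's exact function): positive for moving the end effector
    toward the goal, a large penalty on collision, a bonus on arrival."""
    if collided:
        return collision_penalty
    if curr_dist < goal_threshold:
        return goal_bonus
    # Reward proportional to the distance-to-goal reduction achieved this step.
    return progress_weight * (prev_dist - curr_dist)

# Example: end effector moved from 0.50 m to 0.45 m away from the goal.
r = dense_reward(0.50, 0.45, collided=False)  # → 0.5
```

Because the reward is nonzero at every step (not only at the goal), the agent receives a learning signal throughout the episode, which is what makes such shaping "dense" as opposed to sparse terminal rewards.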
