Abstract

As labor costs rise, robots are increasingly replacing humans in industrial fields, and mobile robots are widely used to execute tasks in harsh industrial environments. Planning a path in an unknown environment is an important problem for such robots. The deep Q-network (DQN), an efficient reinforcement-learning method, has been applied to mobile robot path planning in unknown environments, but it generally converges slowly. This paper presents a method based on Double DQN (DDQN) with prioritized experience replay (PER) for mobile robot path planning in unknown environments. By sensing local information about its surroundings, the mobile robot plans its path with this method. Experimental results show that the proposed method achieves a higher convergence speed and success rate than the standard DQN method in the same experimental environment.
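The two ingredients named in the abstract can be sketched briefly. In Double DQN, the online network selects the next action while the target network evaluates it, reducing the overestimation bias of ordinary DQN; prioritized experience replay samples transitions with probability proportional to their TD error. The following is a minimal illustrative sketch, not the authors' implementation; all class and function names, and the proportional-priority scheme with exponent `alpha`, are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def ddqn_target(q_online_next, q_target_next, reward, done, gamma=0.99):
    """Double DQN target: online net picks the action, target net scores it."""
    a_star = int(np.argmax(q_online_next))          # action selection (online)
    if done:
        return reward
    return reward + gamma * q_target_next[a_star]   # action evaluation (target)

class PrioritizedReplay:
    """Toy proportional prioritized experience replay (illustrative only)."""
    def __init__(self, capacity, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.buffer, self.priorities = [], []

    def add(self, transition, td_error=1.0):
        # Evict the oldest transition once the buffer is full.
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append((abs(td_error) + 1e-6) ** self.alpha)

    def sample(self, batch_size):
        # Sample indices with probability proportional to stored priority.
        p = np.array(self.priorities)
        p /= p.sum()
        idx = rng.choice(len(self.buffer), size=batch_size, p=p)
        return idx, [self.buffer[i] for i in idx]

    def update(self, idx, td_errors):
        # Refresh priorities with the latest TD errors after a learning step.
        for i, e in zip(idx, td_errors):
            self.priorities[i] = (abs(e) + 1e-6) ** self.alpha
```

In practice the sampled TD errors would also be used to compute importance-sampling weights; that correction is omitted here to keep the sketch short.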
