Abstract
As labor costs rise, robots are increasingly replacing humans in industrial settings, and mobile robots are widely used to execute tasks in harsh industrial environments. Path planning in an unknown environment is an important problem for mobile robots. The deep Q-network (DQN), an efficient reinforcement learning method, has been applied to mobile robot path planning in unknown environments, but it generally converges slowly. This paper presents a method based on Double DQN (DDQN) with prioritized experience replay (PER) for mobile robot path planning in unknown environments. By sensing local information about its surroundings, the mobile robot plans its path with this method in an unknown environment. Experimental results show that the proposed method achieves a higher convergence speed and success rate than the standard DQN method in the same experimental environment.
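The core difference between DQN and the Double DQN targets used here is that the online network selects the next action while the target network evaluates it, which reduces the value overestimation that slows plain DQN. A minimal sketch of that target computation, assuming a NumPy representation of a sampled batch (all variable names and the toy values are illustrative, not the paper's implementation):

```python
import numpy as np

def ddqn_targets(rewards, next_q_online, next_q_target, dones, gamma=0.99):
    """Double-DQN target: the online net selects the greedy next action,
    the target net evaluates it. With PER, |target - current Q| (the TD
    error) would then set each transition's sampling priority."""
    best_actions = np.argmax(next_q_online, axis=1)            # selection: online net
    idx = np.arange(len(rewards))
    evaluated = next_q_target[idx, best_actions]               # evaluation: target net
    return rewards + gamma * evaluated * (1.0 - dones)         # zero bootstrap at terminals

# Toy batch of two transitions (illustrative values only)
rewards = np.array([1.0, 0.0])
next_q_online = np.array([[0.2, 0.8], [0.5, 0.1]])  # online net picks actions 1, 0
next_q_target = np.array([[0.3, 0.6], [0.4, 0.2]])  # target net evaluates those actions
dones = np.array([0.0, 1.0])                        # second transition is terminal
targets = ddqn_targets(rewards, next_q_online, next_q_target, dones)
# targets[0] = 1.0 + 0.99 * 0.6 = 1.594; targets[1] = 0.0 (terminal)
```

Plain DQN would instead take `np.max(next_q_target, axis=1)`, letting the same network both select and evaluate, which is the overestimation source DDQN avoids.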