Abstract

Path planning in an unknown environment is a fundamental capability that mobile robots need in order to complete their tasks. As a typical deep reinforcement learning method, the deep Q-network (DQN) algorithm has gained wide popularity in path planning tasks owing to its self-learning ability and its adaptability to complex environments. However, most DQN-based path planning algorithms require substantial training time, and the learned policy depends only on the information observed by the sensors. This results in poor generalization to new tasks and wasted time retraining the model. Therefore, a new deep reinforcement learning method that combines DQN with prior knowledge is proposed to reduce training time and enhance generalization capability. In this method, a fuzzy logic controller is designed to avoid obstacles and keep the robot from exploring blindly, which reduces the training time. A target-driven approach is used to address the lack of generalization: the learned policy depends on the fusion of observed information and target information. Extensive experiments show that the proposed algorithm converges faster than the DQN algorithm in path planning tasks, and that the target can be reached without retraining when the path planning task changes.
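The abstract does not give implementation details, but the two ideas it names can be illustrated concretely. The sketch below, written in PyTorch under assumptions of our own (the class name TargetDrivenDQN, the helper select_action, the layer sizes, and the fuzzy_action argument standing in for the output of the paper's fuzzy logic controller are all hypothetical), shows a Q-network whose input fuses the sensor observation with the target information, and an epsilon-greedy action selector in which a prior-knowledge action can replace blind random exploration.

```python
import torch
import torch.nn as nn


class TargetDrivenDQN(nn.Module):
    """Q-network conditioned on both observation and target, so the learned
    policy depends on the fusion of observed and target information rather
    than on sensor observations alone."""

    def __init__(self, obs_dim: int, target_dim: int, num_actions: int):
        super().__init__()
        # Hypothetical layer sizes; the paper does not specify an architecture.
        self.net = nn.Sequential(
            nn.Linear(obs_dim + target_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 128),
            nn.ReLU(),
            nn.Linear(128, num_actions),
        )

    def forward(self, obs: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # Fuse observed information with target information by concatenation.
        return self.net(torch.cat([obs, target], dim=-1))


def select_action(q_net: TargetDrivenDQN,
                  obs: torch.Tensor,
                  target: torch.Tensor,
                  epsilon: float,
                  fuzzy_action: int | None = None) -> int:
    """Epsilon-greedy selection. When exploring, a fuzzy-logic prior
    (fuzzy_action, assumed to come from an obstacle-avoiding fuzzy
    controller) can replace a uniformly random action, so the robot
    avoids blind exploration during early training."""
    if torch.rand(()) < epsilon:
        if fuzzy_action is not None:
            return fuzzy_action  # prior-knowledge action from the fuzzy controller
        return torch.randint(q_net.net[-1].out_features, ()).item()
    with torch.no_grad():
        return q_net(obs, target).argmax(dim=-1).item()


# Minimal usage example with dummy dimensions.
q_net = TargetDrivenDQN(obs_dim=24, target_dim=2, num_actions=5)
obs = torch.randn(24)        # e.g., laser-scan readings
target = torch.randn(2)      # e.g., relative goal position
action = select_action(q_net, obs, target, epsilon=0.3, fuzzy_action=1)
```

Because the target is a network input rather than baked into the reward alone, changing the goal at test time only changes the target vector, which is one plausible reading of how the method reaches a new target without retraining.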
