Abstract

The path planning problem of a mobile robot in an unknown dynamic environment (UDE) is addressed in this article by building a continuous dynamic simulation environment. To achieve a collision-free path in the UDE, reinforcement learning with a deep Q-network (DQN) is applied so that the mobile robot learns optimal decisions. A reward function is designed with a weight that balances obstacle avoidance against approaching the goal. Moreover, it is found that the relative motion between moving obstacles and the robot may cause abnormal rewards, which can in turn lead to a collision between the robot and an obstacle. To address this problem, two reward thresholds are set to modify the abnormal rewards, and the experiments show that the robot can then avoid all obstacles and reach the goal successfully. Finally, double DQN (DDQN) and dueling DQN are also applied. This article compares the results of reward-modified DQN (RMDQN), reward-modified DDQN (RMDDQN), dueling RMDQN, and dueling RMDDQN, and concludes that RMDDQN performs best.
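To make the reward design concrete, the following is a minimal sketch of a weighted reward with two-threshold modification of the kind the abstract describes. The weight `w`, the thresholds `r_min` and `r_max`, and the distance-based reward terms are assumptions for illustration, not the paper's exact formulation.

```python
def reward(d_goal_prev, d_goal, d_obs_prev, d_obs,
           w=0.5, r_min=-1.0, r_max=1.0):
    """Hypothetical weighted reward with threshold clipping.

    d_goal_prev / d_goal: distance to the goal before / after the step.
    d_obs_prev / d_obs:   distance to the nearest obstacle before / after.
    w:                    weight balancing goal approach vs. avoidance.
    r_min, r_max:         two reward thresholds (assumed values) that
                          clip abnormal rewards caused by the relative
                          motion of moving obstacles.
    """
    # Positive when the robot moves closer to the goal.
    r_goal = d_goal_prev - d_goal
    # Positive when the robot moves away from the nearest obstacle.
    r_obs = d_obs - d_obs_prev
    # Weighted combination of the two objectives.
    r = w * r_goal + (1.0 - w) * r_obs
    # Threshold modification: clamp abnormal values into [r_min, r_max].
    return max(r_min, min(r, r_max))
```

Without the final clamp, a fast-moving obstacle sweeping past the robot can produce a large spurious `r_obs` term in a single step, which is the abnormal-reward effect the thresholds are meant to suppress.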
