Abstract

Path planning is a key requirement for mobile robots employed in tasks such as rescue or transport missions. Conventional path-planning methods such as A* or Dijkstra's algorithm require a prior map of the robot's environment. Dynamic path planning, which drives mobile robots without such prior static knowledge, is now an active research topic. Researchers increasingly harness deep reinforcement learning (DRL), another popular research area, to solve the dynamic path planning problem. In this study, Deep Q-Networks, a subdomain of DRL, are adopted to solve the dynamic path planning problem. We first employ the well-known Double Deep Q-Networks (D2QN) and Dueling Double Deep Q-Networks (D3QN) techniques to train models that can drive a mobile robot through environments containing static and dynamic obstacles in three different configurations. We then propose D3QN with a Prioritized Experience Replay (PER) extension to further optimize the DRL model. We create a test bed to measure the performance of the DRL models against 99 randomly generated goal locations. In our experiments, the D3QN-PER method outperforms D2QN and D3QN in terms of path length and travel time to the goal without any collisions. The Robot Operating System (ROS) and the Gazebo simulator are used to build the training and testing environments; therefore, the trained DRL models can be deployed seamlessly to any ROS-compatible robot.
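The following is a minimal PyTorch sketch of the ingredients named in the abstract: a dueling Q-network head, the Double DQN target computation, and a proportional prioritized replay buffer. The network sizes, hyperparameters, and buffer layout are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling architecture: separate state-value and advantage streams."""
    def __init__(self, obs_dim, n_actions, hidden=128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # A(s, a)

    def forward(self, x):
        h = self.feature(x)
        v, a = self.value(h), self.advantage(h)
        # Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)  (dueling aggregation)
        return v + a - a.mean(dim=1, keepdim=True)

def double_dqn_targets(online, target, rewards, next_obs, dones, gamma=0.99):
    # Double DQN: the online net selects the next action, the target net evaluates it.
    with torch.no_grad():
        next_actions = online(next_obs).argmax(dim=1, keepdim=True)
        next_q = target(next_obs).gather(1, next_actions).squeeze(1)
        return rewards + gamma * (1.0 - dones) * next_q

class PrioritizedReplay:
    """Proportional PER: sample transitions with probability ~ |TD error|^alpha."""
    def __init__(self, capacity=10000, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.priorities = [], []

    def add(self, transition, td_error=1.0):
        if len(self.data) >= self.capacity:
            self.data.pop(0)
            self.priorities.pop(0)
        self.data.append(transition)
        self.priorities.append((abs(td_error) + 1e-6) ** self.alpha)

    def sample(self, batch_size, beta=0.4):
        probs = np.asarray(self.priorities)
        probs = probs / probs.sum()
        idx = np.random.choice(len(self.data), batch_size, p=probs)
        # Importance-sampling weights correct the bias of non-uniform sampling.
        weights = (len(self.data) * probs[idx]) ** (-beta)
        weights = weights / weights.max()
        return [self.data[i] for i in idx], idx, weights
```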
