Abstract

Coverage path planning (CPP) is an important problem with numerous practical applications, and many solutions have been proposed over the past few years. However, most existing methods assume that the robot or agent possesses an a priori map of the environment, an assumption that does not hold in many real-world situations and limits their applicability. Inspired by the recent success of deep reinforcement learning (DRL) in a range of control tasks, we propose a novel CPP algorithm, Adaptive Deep BA* (AD-BA*), which combines DRL with a traveling-salesman-problem formulation to identify an optimal coverage path in an initially unknown environment. Simulations in various 2D mazes indicate that our approach learns the optimal coverage path in a sample-efficient manner, minimizing path overlap while consistently maintaining high coverage. On average, AD-BA* achieves (a) 31.67% and 7.29% lower overlap percentage in room-sized environments and (b) 17.77% and 6.39% lower overlap percentage in larger environments, compared to the state-of-the-art algorithms AD Path and BA*, respectively.
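The comparison above rests on two grid-level quantities: the fraction of free cells the path covers, and the fraction of motion spent revisiting already-covered cells (overlap). The snippet below is a minimal sketch of how such metrics can be computed from per-cell visit counts in a 2D grid world; the function name, the visit-count representation, and this particular definition of overlap are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def coverage_metrics(visits, free_mask):
    """Compute coverage and overlap percentages from a visit-count grid.

    visits:    2D int array, number of times the agent entered each cell.
    free_mask: 2D bool array, True for traversable (non-obstacle) cells.
    """
    free = free_mask.sum()
    covered = np.logical_and(visits > 0, free_mask).sum()
    # Every re-entry into an already-covered free cell counts as overlap.
    revisits = (np.clip(visits - 1, 0, None) * free_mask).sum()
    coverage_pct = 100.0 * covered / free
    overlap_pct = 100.0 * revisits / free
    return coverage_pct, overlap_pct

# Example: a 4x4 obstacle-free grid where the top row was swept once
# and the start cell was entered twice.
visits = np.zeros((4, 4), dtype=int)
visits[0, :] = 1
visits[0, 0] = 2
free_mask = np.ones((4, 4), dtype=bool)
print(coverage_metrics(visits, free_mask))  # (25.0, 6.25)
```

Under this definition, 0% overlap corresponds to a path that never re-enters a covered cell; the reductions reported in the abstract would be measured against a quantity of this kind.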
