Abstract
Autonomous exploration is an important application of multi-vehicle systems, where a team of networked robots is coordinated to explore an unknown environment collaboratively. This technique has attracted significant research interest owing to its usefulness in search and rescue, fault detection and monitoring, localization and mapping, etc. In this paper, a novel cooperative exploration strategy is proposed for multiple mobile robots, which reduces the overall task completion time and energy cost compared to conventional methods. To efficiently navigate the networked robots during collaborative tasks, a hierarchical control architecture is designed that contains a high-level decision-making layer and a low-level target-tracking layer. The proposed cooperative exploration approach is developed using dynamic Voronoi partitions, which minimize duplicated exploration by assigning different target locations to individual robots. To deal with sudden obstacles in the unknown environment, an integrated deep reinforcement learning based collision avoidance algorithm is then proposed, which enables the control policy to learn from human demonstration data and thus improves the learning speed and performance. Finally, simulation and experimental results are provided to demonstrate the effectiveness of the proposed scheme.
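The core idea of the Voronoi-based assignment can be sketched as follows: each robot is restricted to frontier targets inside its own Voronoi cell (points closer to it than to any teammate), so no two robots chase the same region. This is a minimal illustration of the general technique, not the paper's exact algorithm; the function name and data layout are assumptions.

```python
import math

def voronoi_assign(robots, frontiers):
    """Assign each robot the nearest frontier point lying in its own
    Voronoi cell, i.e. the set of points at least as close to it as
    to any other robot, so exploration targets are not duplicated."""
    assignments = {}
    for i, r in enumerate(robots):
        # Frontier points that fall in robot i's Voronoi cell.
        cell = [f for f in frontiers
                if all(math.dist(f, r) <= math.dist(f, q)
                       for j, q in enumerate(robots) if j != i)]
        if cell:
            # Greedy choice: the closest admissible frontier point.
            assignments[i] = min(cell, key=lambda f: math.dist(f, r))
    return assignments

robots = [(0.0, 0.0), (10.0, 0.0)]
frontiers = [(2.0, 1.0), (8.0, 1.0), (5.0, 5.0)]
print(voronoi_assign(robots, frontiers))  # → {0: (2.0, 1.0), 1: (8.0, 1.0)}
```

Because each robot only needs the positions of its neighbors to compute its own cell, this assignment can run in a decentralized fashion.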
Highlights
Multi-robot coordination has already established its worth in both theory and practical applications over the past two decades
Considering that the collision avoidance algorithm can be deployed on each mobile robot independently, the total computational complexity can be described by O(NM), where N is the number of mobile robots and M is a function of the input-state dimensions, output-state dimensions, and the widths of the fully connected layers (the a and c subscripts denoting the actor and critic networks): M = S_a,in F_a1 F_a2 F_a3 S_a,out + (S_c,in F_c1 + S_a,out) F_c2 F_c3 S_c,out
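One way to read this complexity expression in code, with illustrative layer sizes (the variable names and the grouping of terms are assumptions based on the formula as printed):

```python
def per_robot_complexity(s_a_in, f_a, s_a_out, s_c_in, f_c, s_c_out):
    """M = S_a,in*F_a1*F_a2*F_a3*S_a,out
         + (S_c,in*F_c1 + S_a,out)*F_c2*F_c3*S_c,out
    where f_a and f_c are the fully connected layer widths of the
    actor and critic networks, respectively."""
    f_a1, f_a2, f_a3 = f_a
    f_c1, f_c2, f_c3 = f_c
    return (s_a_in * f_a1 * f_a2 * f_a3 * s_a_out
            + (s_c_in * f_c1 + s_a_out) * f_c2 * f_c3 * s_c_out)

# Illustrative (made-up) dimensions:
M = per_robot_complexity(s_a_in=4, f_a=(2, 2, 2), s_a_out=2,
                         s_c_in=6, f_c=(3, 2, 2), s_c_out=1)
N = 3  # number of robots
print(M, N * M)  # → 144 432
```

The key point is that M is fixed by the network architecture, so the total cost grows only linearly in the number of robots N.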
The performance of the proposed Voronoi-based cooperative exploration strategy is tested with different numbers of robots
Summary
Multi-robot coordination has already established its worth in both theory and practical applications over the past two decades. In contrast to artificial potential field based collision-free path planning methods [22]–[24], which require comprehensive knowledge of the environment (e.g., the sizes and positions of the obstacles) and of the robotic platforms (e.g., accurate mathematical models of the robots), deep reinforcement learning techniques have the potential to achieve safe navigation with less prior information about the work area. Motivated by the steady progress and technological advances in cooperative exploration, and especially by the growing need for novel coordination techniques for handling multiple robots, this paper aims to design an efficient autonomous exploration strategy for a decentralized, collaborative multi-robot team that also avoids suddenly observed obstacles. A Voronoi-based exploration algorithm and a deep reinforcement learning based collision avoidance approach are provided to coordinate the robots efficiently while avoiding sudden obstacles.
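One common way to let a DRL policy learn from human demonstration data is to seed the replay buffer with demonstration transitions and sample them alongside the agent's own experience. The sketch below illustrates that general technique only; the class name, the fixed demonstration fraction, and the FIFO eviction are assumptions, not the paper's exact integration scheme.

```python
import random

class DemoSeededReplayBuffer:
    """Replay buffer pre-filled with human demonstration transitions.

    Demonstrations are kept permanently and mixed into every batch at
    a fixed fraction, which can speed up early learning compared to
    training from agent experience alone."""

    def __init__(self, demos, capacity=10000, demo_fraction=0.25):
        self.demos = list(demos)      # demonstration transitions, never evicted
        self.agent = []               # agent's own transitions, FIFO-evicted
        self.capacity = capacity
        self.demo_fraction = demo_fraction

    def add(self, transition):
        """Store one agent transition, evicting the oldest if full."""
        self.agent.append(transition)
        if len(self.agent) > self.capacity:
            self.agent.pop(0)

    def sample(self, batch_size):
        """Draw a batch containing both demonstration and agent data."""
        n_demo = min(len(self.demos), int(batch_size * self.demo_fraction))
        batch = random.sample(self.demos, n_demo)
        batch += random.sample(self.agent,
                               min(len(self.agent), batch_size - n_demo))
        return batch
```

For example, with `demo_fraction=0.5` and a batch of 8, each batch contains 4 demonstration transitions and 4 agent transitions (once both pools are large enough).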