Abstract

A heterogeneous multi-robot system consisting of Unmanned Ground Vehicles (UGVs) and Unmanned Aerial Vehicles (UAVs) has advantages over a single-robot system in efficiency and flexibility, enabling it to perform a wider range of tasks. For heterogeneous platforms to work together in GPS-denied scenarios, it is crucial to build a complete 3D map of the environment. In this letter, a novel method is presented for ground-aerial collaborative mapping that leverages visual and range data collected by cameras and 3D LiDAR sensors. In the proposed system, a visual-LiDAR ego-motion estimation module that considers point, line, and planar constraints provides robust odometry. Thumbnail images representing obstacle outlines are generated, and descriptors are extracted from them with a neural network to support data association between separate runs. Map segments and robot poses are organized together and updated in a pose graph optimization procedure. The proposed ground-aerial collaborative mapping approach is evaluated on both synthetic and real-world datasets and compared with other methods. Experimental results demonstrate that our method achieves outstanding mapping results.
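
To make the back-end step concrete, the sketch below shows a minimal 2D pose-graph optimization of the general kind the abstract's pose graph optimization procedure refers to. It is an illustrative assumption, not the authors' implementation: the actual system operates on SE(3) poses with attached map segments, and all names here (wrap, relative_pose, residuals, the sample edge list) are hypothetical. Odometry edges chain consecutive poses, and a loop-closure edge (such as one produced by a thumbnail-descriptor match between runs) constrains the trajectory globally.

# Minimal 2D pose-graph sketch (illustrative only; the letter's system
# uses SE(3) poses and map segments, which this simplification omits).
import numpy as np
from scipy.optimize import least_squares

def wrap(a):
    # Wrap an angle to (-pi, pi].
    return (a + np.pi) % (2 * np.pi) - np.pi

def relative_pose(xi, xj):
    # Relative SE(2) transform from pose xi = (x, y, theta) to pose xj,
    # expressed in the frame of xi.
    dx, dy = xj[0] - xi[0], xj[1] - xi[1]
    c, s = np.cos(xi[2]), np.sin(xi[2])
    return np.array([c * dx + s * dy, -s * dx + c * dy, wrap(xj[2] - xi[2])])

def residuals(flat, edges):
    poses = flat.reshape(-1, 3)
    res = [poses[0]]  # anchor the first pose at the origin (gauge fix)
    for i, j, meas in edges:
        err = relative_pose(poses[i], poses[j]) - meas
        err[2] = wrap(err[2])
        res.append(err)
    return np.concatenate(res)

# Odometry edges along a square trajectory plus a loop-closure edge
# (hypothetical measurements: 1 m forward, then a 90-degree turn).
edges = [
    (0, 1, np.array([1.0, 0.0, np.pi / 2])),
    (1, 2, np.array([1.0, 0.0, np.pi / 2])),
    (2, 3, np.array([1.0, 0.0, np.pi / 2])),
    (3, 0, np.array([1.0, 0.0, np.pi / 2])),  # loop closure back to start
]
init = np.zeros(12)  # four poses, all starting at the origin
sol = least_squares(residuals, init, args=(edges,))
print(sol.x.reshape(-1, 3))  # recovers the unit square up to numerics

A real collaborative back-end would additionally weight each edge by its measurement covariance and re-optimize whenever a new inter-run association is found, which is the role the thumbnail descriptors play in the described system.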
