Abstract

With the development of science and technology, robots have been widely used in smart cities. Traversability mapping through environment perception is a prerequisite for robots to perform tasks. To reduce the energy consumption of traversability mapping for an unmanned ground vehicle (UGV), we fuse wide-coverage aerial images with a small number of ground images to provide vision for the UGV. Current map fusion methods are usually constrained by the homogeneous models of robotic systems and the lack of diverse sensors. As a result, they cannot work well in heterogeneous collaborative robotic systems that consist of aerial and ground robots. In this paper, we use a heterogeneous robot system, comprising a UGV and an unmanned aerial vehicle (UAV), to build an occupancy grid map that can be used for navigation. To fuse sensor data of different types, we propose a Collaborative Map Fusion algorithm based on Multi-task Gaussian Process Classification (MTGPC) for heterogeneous robotic systems. In addition, a probabilistic model is exploited in traversability mapping, so active perception can be used to build the map efficiently. Our system is tested in real scenes and achieves an accuracy of more than 70%. Map fusion using active perception outperforms map fusion using a random strategy in both speed and accuracy. To our knowledge, this is the first work to build an occupancy grid map from sparse data points sampled from aerial images and a ground lidar map.
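To make the approach concrete, the following is a minimal single-task sketch of GP-based traversability mapping with variance-driven active perception. It is not the paper's multi-task (MTGPC) model: it uses GP regression on ±1 labels squashed through a sigmoid, a common approximation to GP classification, and all function names, kernel parameters, and sample points are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, variance=1.0):
    # Squared-exponential kernel between point sets A (n,2) and B (m,2).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length_scale ** 2)

def gp_traversability(X_train, y_train, X_grid, noise=1e-2):
    """GP regression on +/-1 traversability labels, squashed through a
    sigmoid to approximate classification probabilities per grid cell."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_grid, X_train)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha                       # predictive mean per cell
    v = np.linalg.solve(K, Ks.T)
    var = 1.0 - np.einsum('ij,ji->i', Ks, v)  # predictive variance
    prob = 1.0 / (1.0 + np.exp(-mean))      # P(cell is traversable)
    return prob, var

# Sparse samples, e.g. from aerial imagery and ground lidar (illustrative).
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [3.0, 3.0]])
y = np.array([1.0, 1.0, 1.0, -1.0])  # +1 traversable, -1 obstacle
grid = np.array([[0.5, 0.5], [3.0, 2.9]])
prob, var = gp_traversability(X, y, grid)

# Active perception: sample the cell with the highest predictive
# variance next, instead of choosing cells at random.
next_cell = grid[np.argmax(var)]
```

Because the probabilistic model exposes per-cell uncertainty, the active strategy above targets the least-known regions first, which is what lets the map converge faster than random sampling.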
