Abstract

Autonomous navigation and landing of unmanned aerial vehicles (UAVs) are critical capabilities for full autonomy in unknown environments. However, current methods rely heavily on positioning devices or geometric maps, which leads to two limitations: (1) UAVs fail to operate normally indoors when positioning signals are blocked, and (2) navigation systems cannot carry out high-level decision-making missions. In this paper, we propose a novel and complete framework for the autonomous landing of UAVs in unknown indoor scenes based on visual SLAM, semantic segmentation, terrain estimation, and a decision-making model. First, a 3D map is built on top of the visual SLAM system with semantic features, and multiple map types (e.g., texture map, octo-map, grid map, and semantic topology) are then created for different functional requirements. To achieve high-level scene understanding, we design a data association rule that fuses semantic features extracted by a deep learning model into the construction of the topology structure. Next, a terrain estimation strategy is applied to the lightweight grid map, which is modeled using only a low-level elevation representation. Using multiple terrain constraint factors, a small set of candidate landing sites in safe regions is selected via a clustering algorithm. Finally, guided by the semantic topology, we construct a decision-making model for UAV landing that effectively integrates terrain safety, environment perception, and path planning. We perform extensive autonomous landing experiments on multiple indoor scenes from the TUM RGB-D dataset, building complete maps of the unknown environments simultaneously. The experiments demonstrate that the proposed framework can effectively select an optimal landing site for advanced UAV missions.
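To make the terrain-estimation and site-selection step more concrete, the sketch below illustrates one plausible way to pick candidate landing sites from a low-level elevation grid map: cells are checked against simple terrain constraints (local slope and roughness thresholds), and the surviving safe cells are grouped with a clustering algorithm whose region centroids become candidate sites. This is only a minimal illustration of the general idea; the thresholds, grid resolution, neighborhood size, and the use of scikit-learn's DBSCAN are assumptions for this example, not the authors' exact method.

```python
# Illustrative sketch (not the paper's implementation): candidate landing-site
# selection from an elevation grid map via terrain constraints + clustering.
# All thresholds and parameters below are assumed values for demonstration.
import numpy as np
from sklearn.cluster import DBSCAN


def select_landing_sites(elevation, cell_size=0.05,
                         max_slope=0.15, max_roughness=0.02,
                         min_region_cells=25):
    """Return (row, col) centroids of safe, flat regions in an elevation grid.

    elevation        : 2D array of cell heights in meters (NaN = unobserved)
    cell_size        : grid resolution in meters (assumed)
    max_slope        : maximum allowed local gradient magnitude (assumed)
    max_roughness    : maximum allowed local height std-dev in meters (assumed)
    min_region_cells : minimum cluster size accepted as a landing region
    """
    # Local slope from finite differences of the elevation surface.
    gy, gx = np.gradient(np.nan_to_num(elevation), cell_size)
    slope = np.hypot(gx, gy)

    # Local roughness: height deviation inside a 3x3 neighborhood.
    h, w = elevation.shape
    rough = np.full(elevation.shape, np.inf)
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            patch = elevation[r - 1:r + 2, c - 1:c + 2]
            if not np.isnan(patch).any():
                rough[r, c] = patch.std()

    # Terrain constraints: a cell is "safe" if observed, nearly flat, and smooth.
    safe = (~np.isnan(elevation)) & (slope < max_slope) & (rough < max_roughness)

    # Cluster contiguous safe cells; each sufficiently large cluster yields one
    # candidate landing site at its centroid (in grid coordinates).
    cells = np.argwhere(safe)
    if len(cells) == 0:
        return []
    labels = DBSCAN(eps=1.5, min_samples=4).fit_predict(cells)
    sites = []
    for lab in set(labels) - {-1}:
        region = cells[labels == lab]
        if len(region) >= min_region_cells:
            sites.append(tuple(region.mean(axis=0)))
    return sites
```

In the framework described above, such candidate sites would then be ranked by the decision-making model, which additionally accounts for the semantic topology, environment perception, and path-planning cost; that stage is omitted from this sketch.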
