Abstract

In this paper, a novel deep learning dataset, called Air2Land, is presented for advancing the state-of-the-art in object detection and pose estimation in the context of fixed-wing unmanned aerial vehicle autolanding scenarios. It bridges vision and control for ground-based vision guidance systems that rely on multi-modal data obtained from diverse sensors, and pushes forward the development of computer vision and autopilot algorithms targeted at visually assisted landing of a fixed-wing vehicle. The dataset is composed of sequential stereo images and synchronised sensor data, namely the flying vehicle pose and Pan-Tilt Unit angles, simulated under various climate conditions and landing scenarios. Since real-world automated landing data are very limited, the proposed dataset provides the necessary foundation for vision-based tasks such as flying vehicle detection, key point localisation and pose estimation. In addition to providing plentiful, scene-rich data, the developed dataset covers high-risk scenarios that are hardly accessible in reality. The dataset is openly available at https://github.com/micros-uav/micros_air2land.
