Abstract
Estimating absolute pose from pre-loaded satellite images is an important means of autonomous navigation for small Unmanned Aerial Vehicles (UAVs) in Global Navigation Satellite System (GNSS)-denied environments. Most previous works build Convolutional Neural Networks (CNNs) to extract features and then directly regress the pose, an approach that fails under the large viewpoint and scale differences between "UAV-satellite" image pairs in real-world scenarios. This paper therefore proposes a probability-distribution/regression integrated deep model with an attention-guided triple fusion mechanism, which estimates discrete distributions in pose space and three-dimensional vectors in translation space. To overcome the shortage of relevant datasets, this paper simulates image datasets captured by UAVs with forward-facing cameras during target searching and autonomous attack. The effectiveness, superiority, and robustness of the proposed method are verified on the simulated datasets and in flight tests.
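To make the dual-output idea concrete, the following is a minimal, hypothetical PyTorch sketch (not the authors' architecture) of a network that fuses UAV and satellite image features with attention and then emits both a discrete probability distribution over binned orientations and a regressed three-dimensional translation vector. All module names, layer sizes, and the number of orientation bins are illustrative assumptions.

```python
# Hypothetical sketch of a distribution/regression dual-head pose network.
# Not the paper's implementation; sizes and bin counts are assumptions.
import torch
import torch.nn as nn

class PoseDistRegNet(nn.Module):
    def __init__(self, feat_dim=256, num_rot_bins=72):
        super().__init__()
        # Shared lightweight CNN backbone applied to both views (assumed design).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Cross-attention fuses the UAV embedding with the satellite embedding.
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)
        # Head 1: discrete distribution over orientation bins (pose space).
        self.rot_head = nn.Linear(feat_dim, num_rot_bins)
        # Head 2: direct regression of the 3-D translation vector.
        self.trans_head = nn.Linear(feat_dim, 3)

    def forward(self, uav_img, sat_img):
        f_uav = self.backbone(uav_img).flatten(1).unsqueeze(1)  # (B, 1, D)
        f_sat = self.backbone(sat_img).flatten(1).unsqueeze(1)  # (B, 1, D)
        fused, _ = self.attn(f_uav, f_sat, f_sat)               # UAV queries satellite
        fused = fused.squeeze(1)
        rot_logits = self.rot_head(fused)     # softmax gives the pose distribution
        translation = self.trans_head(fused)  # 3-D translation estimate
        return rot_logits, translation

# Usage: orientation probabilities plus a translation vector per image pair.
model = PoseDistRegNet()
rot_logits, t = model(torch.randn(2, 3, 128, 128), torch.randn(2, 3, 128, 128))
rot_prob = rot_logits.softmax(dim=-1)
```

Predicting a distribution over discretized poses rather than a single regressed value is one common way to keep multi-modal hypotheses alive under severe viewpoint and scale gaps, while the separate regression head still provides a continuous translation estimate.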