Abstract

Obtaining the absolute pose from pre-loaded satellite images is an important means of autonomous navigation for small Unmanned Aerial Vehicles (UAVs) in Global Navigation Satellite System (GNSS) denied environments. Most previous works build Convolutional Neural Networks (CNNs) to extract features and then directly regress the pose, an approach that fails under the large viewpoint and scale differences between “UAV-satellite” image pairs in real-world scenarios. Therefore, this paper proposes a probability distribution/regression integrated deep model with an attention-guided triple fusion mechanism, which estimates discrete distributions in pose space and three-dimensional vectors in translation space. To overcome the lack of relevant datasets, this paper simulates image datasets captured by UAVs with forward-facing cameras during target searching and autonomous attack. The effectiveness, superiority, and robustness of the proposed method are verified on the simulated datasets and in flight tests.
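
The abstract describes a hybrid output design: a discrete probability distribution over pose and a direct regression of the translation vector. The following is a minimal PyTorch sketch of such a dual-head module, not the authors' implementation; the feature dimension, number of pose bins, and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn


class HybridPoseHead(nn.Module):
    """Sketch of a hybrid head: a categorical distribution over discretized
    pose (rotation) bins plus a direct 3-D translation regression."""

    def __init__(self, feat_dim: int = 512, n_pose_bins: int = 72):
        super().__init__()
        # Classification branch: logits over discretized pose bins (assumed binning).
        self.pose_cls = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(inplace=True),
            nn.Linear(256, n_pose_bins),
        )
        # Regression branch: a three-dimensional translation vector.
        self.trans_reg = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 3),
        )

    def forward(self, fused_feat: torch.Tensor):
        pose_logits = self.pose_cls(fused_feat)           # (B, n_pose_bins)
        pose_dist = torch.softmax(pose_logits, dim=-1)    # discrete distribution in pose space
        translation = self.trans_reg(fused_feat)          # (B, 3) translation vector
        return pose_dist, translation


if __name__ == "__main__":
    head = HybridPoseHead()
    feats = torch.randn(4, 512)       # placeholder for fused "UAV-satellite" features
    pose_dist, t = head(feats)
    print(pose_dist.shape, t.shape)   # torch.Size([4, 72]) torch.Size([4, 3])
```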
