Abstract

Unmanned aerial vehicles (UAVs) are widely used across many industries. Using UAV images for surveying requires that the images carry high-precision localization information, yet the accuracy of UAV localization can be compromised in complex GNSS environments. To address this challenge, this study proposes a scheme that combines traditional and deep learning methods to rapidly improve the localization accuracy of UAV sequence images. First, individual UAV images with high similarity were selected using an image retrieval and localization method based on cosine similarity. Then, exploiting the relationships among UAV sequence images, short strip sequences were selected to support approximate location retrieval. Next, a deep learning image registration network combining SuperPoint and SuperGlue was employed for high-precision feature point extraction and matching, and the RANSAC algorithm was applied to eliminate mismatched points. In this way, the localization accuracy of the UAV images was improved. Experimental results demonstrate that the mean errors of the approach were all within 2 pixels; with a satellite reference image at a resolution of 0.30 m/pixel, the mean ground localization error was reduced to 0.356 m.
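The following is a minimal sketch, not the authors' implementation, of the pipeline outlined above: cosine-similarity retrieval of the best-matching satellite reference tile, followed by RANSAC-based outlier rejection over keypoint matches. The functions `retrieve_best_tile` and `register`, and the assumption that a SuperPoint + SuperGlue stage supplies matched keypoint coordinates, are illustrative assumptions; only the cosine similarity and `cv2.findHomography` calls use concrete, standard APIs.

```python
import numpy as np
import cv2


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two global descriptor vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def retrieve_best_tile(uav_desc: np.ndarray, tile_descs: list[np.ndarray]) -> int:
    """Return the index of the satellite tile whose descriptor is most
    similar to the UAV image descriptor (retrieval step)."""
    sims = [cosine_similarity(uav_desc, d) for d in tile_descs]
    return int(np.argmax(sims))


def register(uav_kpts: np.ndarray, sat_kpts: np.ndarray):
    """Estimate a homography from matched keypoints, rejecting mismatches
    with RANSAC.

    uav_kpts, sat_kpts: (N, 2) float arrays of matched pixel coordinates,
    assumed to come from a SuperPoint + SuperGlue matching stage (not shown).
    """
    H, inlier_mask = cv2.findHomography(uav_kpts, sat_kpts, cv2.RANSAC, 3.0)
    return H, inlier_mask
```

Once the homography is estimated, UAV pixel coordinates can be projected into the georeferenced satellite frame, which is how the pixel-level errors reported in the abstract translate into ground distances at the reference image's resolution.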
