Abstract

Unmanned aerial vehicle (UAV) image stitching techniques based on position and attitude information have shown a clear speed advantage over feature-based counterparts. However, improving stitching accuracy and robustness remains a great challenge, since position and attitude parameters are sensitive to noise introduced by sensors and the external environment. To mitigate this issue, this work presents a simple yet effective stitching algorithm for UAV images based on a coarse-to-fine strategy. Specifically, we first conduct coarse registration using the position and attitude information obtained from GPS, IMU, and altimeter. Then, we introduce a novel offline calibration phase that regresses the obtained global transformation matrix toward the optimal one computed by feature-based algorithms, using multi-layer perceptron (MLP) neural networks for fast correction. Consequently, the proposed method integrates the complementary strengths of parameter-based and feature-based methods, achieving an ideal speed–accuracy tradeoff. Moreover, to facilitate research on this topic, we release to the community a new dataset, named UAV-AIRPAI, comprising over 100 UAV image pairs with position and attitude annotations, opening up a promising direction for UAV image stitching. Extensive experiments on the UAV-AIRPAI dataset show that our method achieves superior accuracy compared to prior methods while running at a real-time speed of 0.0124 s per image pair. Code and data will be available at https://github.com/dededust/UAV-AIRPAI.
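To make the offline calibration idea concrete, below is a minimal sketch of how an MLP could regress a pose-derived coarse transformation toward a feature-based target. This is not the authors' implementation: the `HomographyCorrector` class, the 8-parameter flattened-homography layout (with the bottom-right entry fixed to 1), the residual prediction, and the placeholder training data are all assumptions for illustration.

```python
import torch
import torch.nn as nn

class HomographyCorrector(nn.Module):
    """Hypothetical MLP that refines a coarse pose-derived homography.

    Input:  the 8 free parameters of the coarse 3x3 transform
            (flattened row-wise, with H[2,2] fixed to 1).
    Output: the 8 parameters of the refined transform, trained offline to
            match homographies estimated by a feature-based method.
    """
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(8, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 8),
        )

    def forward(self, h_coarse: torch.Tensor) -> torch.Tensor:
        # Predict a residual so the identity correction is easy to learn.
        return h_coarse + self.net(h_coarse)


# Offline calibration: regress coarse transforms toward feature-based ones.
model = HomographyCorrector()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# h_coarse_batch: (N, 8) from GPS/IMU/altimeter coarse registration.
# h_feature_batch: (N, 8) from a feature-based estimator (e.g., SIFT + RANSAC).
h_coarse_batch = torch.randn(32, 8)    # placeholder data
h_feature_batch = torch.randn(32, 8)   # placeholder data

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(h_coarse_batch), h_feature_batch)
    loss.backward()
    optimizer.step()
```

At inference time, only the cheap pose-based registration and a single MLP forward pass are needed, which is consistent with the per-pair runtime reported in the abstract; the expensive feature-based estimation is confined to the offline calibration phase.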
