Abstract

UAV remote sensing image stitching provides a more comprehensive and continuous view of an area by stitching several images of overlapping regions. It has been widely used in many fields, such as airborne reconnaissance and surveillance, agricultural production, and geological disaster monitoring. However, traditional feature-based image stitching methods often require hand-crafted features and multiple execution steps; they may extract many redundant feature points in non-overlapping regions and can fail to stitch images with weak textures. To solve this problem, we propose an improved VGG16 Siamese network for UAV remote sensing image stitching that achieves end-to-end stitching. The first 13 layers of the VGG16 network were taken to form a shared-weight feature extraction network, and an improved Squeeze-and-Excitation module was introduced to effectively extract features in the overlapping areas of the images. An affine matrix regression network was designed using the LeakyReLU activation function to preserve the relevant feature maps after fine feature matching, which improves the accuracy of image stitching. In addition, we constructed datasets to train our network: from a single UAV remote sensing image, a training image pair was formed by applying a bounded affine transformation, yielding the ground-truth affine transformation label between the pair. We compared our method with SIFT+RANSAC, APAP, and a deep homography estimation-based image stitching algorithm on the UAV remote sensing datasets. Experiments show that the structural similarity (SSIM) increased by 24.1% and the root mean square error (RMSE) decreased by 14.69%. Moreover, the proposed stitching model also achieves good subjective visual quality on UAV remote sensing images as well as weak-texture images.
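The dataset-construction step described above — warping a single image by a bounded random affine transform so that the transform itself becomes the ground-truth label — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the parameter bounds (`max_rot_deg`, `max_scale`, `max_trans`) are hypothetical, since the abstract only states that the affine transformation is limited in degree.

```python
import numpy as np

def random_affine(max_rot_deg=10.0, max_scale=0.1, max_trans=16.0, seed=None):
    """Sample a bounded random 2x3 affine matrix (hypothetical bounds):
    small rotation, near-unit isotropic scale, and small translation."""
    rng = np.random.default_rng(seed)
    theta = np.deg2rad(rng.uniform(-max_rot_deg, max_rot_deg))
    s = 1.0 + rng.uniform(-max_scale, max_scale)
    tx, ty = rng.uniform(-max_trans, max_trans, size=2)
    c, si = np.cos(theta), np.sin(theta)
    # Left 2x2 block: rotation scaled by s; last column: translation.
    return np.array([[s * c, -s * si, tx],
                     [s * si,  s * c, ty]])

def apply_affine(A, pts):
    """Apply a 2x3 affine matrix A to an (N, 2) array of points."""
    return pts @ A[:, :2].T + A[:, 2]

# A training pair would be (image, warp(image, A)) with label A;
# here we just warp the corner points of a 256x256 patch.
A = random_affine(seed=0)
corners = np.array([[0, 0], [256, 0], [256, 256], [0, 256]], dtype=float)
warped = apply_affine(A, corners)
```

Because the label is generated rather than annotated, it is exact: inverting the sampled matrix recovers the original points, which makes supervised training of the regression network straightforward.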
