Abstract

Multitemporal snow remote sensing image matching is an important data-processing step for snow monitoring and environmental change analysis. Extensive snow coverage weakens the saliency of local features, producing significant feature differences between image pairs and making it difficult to obtain consistent local features, which poses a great challenge for image matching. To address this issue, we propose a multitemporal snow remote sensing image matching method that considers global and contextual features. The method can extract consistent features between two images and perform matching even under extensive snow coverage. Specifically, it strengthens the aggregation of global information by extracting global positional information and contextual features at different scales and receptive fields, yielding robust matching descriptors that carry nonlocal information. We design corresponding loss functions, incorporating an average-precision loss before contextual feature extraction and combining it with a description loss and a keypoint detection loss for training. Extensive experiments demonstrate that our method performs well on the task of multitemporal snow remote sensing image matching, improving matching precision and recall by 11.5% and 7.5%, respectively, over the next best results in the experiments.
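The abstract describes the training objective only at a high level. A minimal sketch of how an average-precision term could be combined with a description loss and a keypoint detection loss; the function names, weights, and the simple weighted-sum form are assumptions for illustration, not the paper's actual formulation:

```python
import numpy as np

def average_precision(scores, labels):
    """AP over candidate matches ranked by similarity score.

    scores: similarity of each candidate correspondence.
    labels: 1 if the candidate is a correct match, else 0.
    """
    order = np.argsort(-np.asarray(scores))   # rank by descending score
    labels = np.asarray(labels)[order]
    if labels.sum() == 0:
        return 0.0
    cum_correct = np.cumsum(labels)
    precision_at_k = cum_correct / (np.arange(len(labels)) + 1)
    # Average precision over the positions of the correct matches.
    return float((precision_at_k * labels).sum() / labels.sum())

def total_loss(ap, desc_loss, kp_loss, w_ap=1.0, w_desc=1.0, w_kp=1.0):
    """Hypothetical combined objective: AP loss (1 - AP) plus weighted
    description and keypoint-detection losses."""
    return w_ap * (1.0 - ap) + w_desc * desc_loss + w_kp * kp_loss
```

For example, a ranking that places one wrong match between two correct ones yields AP = 5/6, so the AP component of the loss is 1/6 before the other terms are added.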
