Abstract

Because of differences in imaging mechanisms, optical and Synthetic Aperture Radar (SAR) images exhibit significant radiometric and geometric discrepancies, which makes automatic and accurate matching between them a challenging research problem. Handcrafted structural features have achieved some success in heterogeneous image matching in recent years, but further improving their matching performance by hand-tuning is difficult. This work therefore presents a matching method based on attention-enhanced structural feature representation to improve the matching accuracy of optical and SAR images. Building on handcrafted structural feature extraction, a novel multi-branch global attention module is constructed; it focuses on the information shared by structural feature descriptors across the spatial and channel dimensions, extracting finer and more robust image features. The proposed method then builds a loss function from a sum-of-squared-differences (SSD) learning metric computed via the fast Fourier transform, and trains on positive and negative samples with this loss to enhance the model's discriminative ability. Experimental results from training and testing on multiple optical and SAR datasets show that the proposed method significantly improves the accuracy of matching optical and SAR images compared with both existing structural feature matching methods and advanced deep learning matching models.
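The abstract's FFT-based SSD metric follows a standard identity: the SSD between a template and every image window expands into a sliding window-energy term, a constant template-energy term, and a cross-correlation term that can be evaluated in the frequency domain. The sketch below illustrates that identity with NumPy; it is a minimal illustration of the general technique, not the paper's implementation (function and variable names are our own).

```python
import numpy as np

def ssd_map_fft(image, template):
    """SSD between a template and every valid window of an image.

    Uses the expansion  SSD(u, v) = sum(I^2 over window) + sum(T^2)
    - 2 * (I cross-correlated with T)(u, v),  with the correlation
    term computed via the FFT. Illustrative sketch, not the paper's code.
    """
    H, W = image.shape
    h, w = template.shape
    # Cross-correlation term in the frequency domain (template zero-padded).
    F_img = np.fft.rfft2(image)
    F_tpl = np.fft.rfft2(template, s=image.shape)
    cross = np.fft.irfft2(F_img * np.conj(F_tpl), s=image.shape)
    cross = cross[: H - h + 1, : W - w + 1]  # keep the non-wrapped region
    # Sliding-window sum of I^2 via an integral image (summed-area table).
    sq = np.pad(image ** 2, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    win_sq = sq[h:, w:] - sq[:-h, w:] - sq[h:, :-w] + sq[:-h, :-w]
    return win_sq + np.sum(template ** 2) - 2.0 * cross
```

For an H×W image and h×w template this replaces an O(HWhw) brute-force scan with O(HW log HW) work, which is what makes the SSD term cheap enough to use inside a training loss.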
