Abstract

Image matching is a primary technology for optical and synthetic aperture radar (SAR) image fusion, but it often shows limited performance due to the highly nonlinear differences between optical and SAR modalities. Recently, deep neural networks (DNNs) have been investigated to effectively extract nonlinear features for image matching tasks, where DNNs are trained with carefully designed loss functions and a lower loss value is usually expected to yield better matching performance. In this letter, we first theoretically demonstrate that when the value of a state-of-the-art loss function decreases, the corresponding matching performance may not consistently improve, due to the imbalanced effect of positive and negative samples. To tackle this issue, we propose an improved loss function for training DNNs to match SAR and optical images. Using a Taylor series expansion analysis, we theoretically prove that the improved loss function guarantees an improvement in matching performance as the loss value decreases. Experimental results on an open dataset with extensive optical and SAR image pairs show that 1) the proposed loss function outperforms the original one in terms of image matching performance and 2) combining our loss function with the existing multiscale convolutional gradient feature (MCGF)-based network yields better matching performance than other state-of-the-art approaches.
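To make the imbalance issue concrete, the sketch below shows a generic contrastive-style matching loss with an explicit weighting term that balances the contributions of positive and negative descriptor pairs. This is only an illustrative assumption of the general idea, not the authors' actual loss formulation; the function name, the margin, and the `pos_weight` parameter are hypothetical.

```python
import torch

def balanced_matching_loss(pos_dist, neg_dist, margin=1.0, pos_weight=0.5):
    """Illustrative contrastive-style loss with positive/negative balancing.

    pos_dist: descriptor distances of matched (positive) optical/SAR patch pairs
    neg_dist: descriptor distances of unmatched (negative) pairs
    pos_weight: relative weight of the positive term; the remainder goes to negatives
    """
    # Positive pairs are pulled together; negative pairs are pushed beyond the margin.
    pos_term = pos_dist.pow(2).mean()
    neg_term = torch.clamp(margin - neg_dist, min=0.0).pow(2).mean()
    # Explicit weighting keeps one class of samples from dominating the gradient,
    # so that a lower loss value more reliably tracks better matching performance.
    return pos_weight * pos_term + (1.0 - pos_weight) * neg_term
```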
