Abstract

Image matching is a key technology for fusing the complementary information in optical and SAR images. Because of the highly nonlinear radiometric and geometric relationship between the two modalities, optical-SAR image matching remains a largely unsolved challenge. In this study, we propose a Siamese convolutional neural network (CNN) architecture that learns pixelwise deep dense features. The proposed network balances the learning of high-level semantic information and low-level fine-grained information, both of which are essential for the feature matching task. Under a local-search framework, the loss function is defined on the score map produced by the sum of squared differences (SSD) between the learned pixelwise dense features of local optical and SAR image patches, with a fast implementation in the frequency domain. A hardest-negative mining strategy is adopted to increase the discriminative power of the network. Extensive experiments on optical and SAR image pairs of different spatial resolutions and land-cover types verify the superiority and robustness of the proposed method in terms of matching accuracy and matching precision.
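The abstract's frequency-domain SSD score map can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the Siamese network has already produced a small optical template feature map and a larger SAR search feature map, and expands the SSD as a constant template-energy term, a cross-correlation term, and a sliding window of search energy, the latter two computed with FFTs. All shapes and names here are hypothetical.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def ssd_score_map(template, search):
    """SSD between a template feature map and every aligned position of a
    larger search feature map, computed via FFT cross-correlation.

    template: (C, h, w) dense features of the optical patch (assumed given)
    search:   (C, H, W) dense features of the SAR search region
    Returns an (H - h + 1, W - w + 1) score map; lower values mean a
    better match. Uses SSD = sum(T^2) - 2*sum(T*S) + sum(S^2 over window).
    """
    C, h, w = template.shape
    _, H, W = search.shape
    out_h, out_w = H - h + 1, W - w + 1

    # Constant term: total energy of the template features.
    t_sq = np.sum(template ** 2)

    # Sliding sum of squared search values, as circular cross-correlation
    # of the per-pixel energy with an all-ones kernel of the template size.
    ones = np.zeros((H, W))
    ones[:h, :w] = 1.0
    s_energy = (search ** 2).sum(axis=0)
    s_sq = np.real(ifft2(fft2(s_energy) * np.conj(fft2(ones))))[:out_h, :out_w]

    # Cross-correlation term, accumulated over feature channels.
    cross = np.zeros((out_h, out_w))
    for c in range(C):
        t_pad = np.zeros((H, W))
        t_pad[:h, :w] = template[c]
        corr = np.real(ifft2(fft2(search[c]) * np.conj(fft2(t_pad))))
        cross += corr[:out_h, :out_w]

    return t_sq - 2.0 * cross + s_sq
```

Cropping to the first `out_h x out_w` entries discards the wrap-around positions of the circular correlation, leaving exactly the valid sliding-window offsets; the best candidate match is then `np.unravel_index(np.argmin(score), score.shape)`.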
