ABSTRACT Matching optical and synthetic aperture radar (SAR) images is often hindered by intricate geometric distortion and nonlinear radiation differences, which lead to too few and unevenly distributed corresponding points. To tackle this issue, we propose a lightweight deep convolutional network with inverted residuals for optical and SAR image matching. First, a fully convolutional neural network (FCNN) is designed to extract high-level semantic features, robustly capturing characteristics shared by optical and SAR images and thereby handling geometric distortion and nonlinear radiation changes. Notably, we integrate a lightweight architecture with inverted residuals into the FCNN to extract both local and global contextual information, facilitating feature reuse and minimizing the loss of crucial features. Additionally, a vector-refined module is deployed to refine the dense features and filter out redundant information. Subsequently, a coarse-to-fine strategy is employed to eliminate gross errors and incorrect matches. Finally, we evaluate the proposed network on optical and SAR image matching against hand-crafted methods and state-of-the-art deep learning techniques. Experimental results demonstrate that our network significantly surpasses existing methods in both the number of correct matches and matching accuracy, achieving at least a 2.8-fold increase in correct matches and an 18% improvement in matching accuracy.
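The abstract does not specify the exact layout of the inverted residual blocks; below is a minimal sketch of the standard MobileNetV2-style inverted residual (expand, depthwise filter, linearly project, with a skip connection for feature reuse), which the described lightweight architecture presumably resembles. The class name `InvertedResidual` and its parameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    """MobileNetV2-style inverted residual block (a generic sketch,
    not the paper's exact design): 1x1 expansion -> 3x3 depthwise
    convolution -> 1x1 linear projection, with a residual shortcut
    when input and output shapes match."""

    def __init__(self, in_ch, out_ch, stride=1, expand_ratio=6):
        super().__init__()
        hidden = in_ch * expand_ratio
        # Skip connection only when spatial size and channels are preserved
        self.use_skip = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            # 1x1 pointwise expansion to a wider feature space
            nn.Conv2d(in_ch, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            # 3x3 depthwise convolution: cheap per-channel spatial filtering
            nn.Conv2d(hidden, hidden, 3, stride, 1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            # 1x1 linear projection back to a narrow bottleneck (no activation)
            nn.Conv2d(hidden, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        y = self.block(x)
        # The shortcut reuses input features and limits information loss
        return x + y if self.use_skip else y

# Example: a 64-channel feature map passes through one block, shape preserved
feats = torch.randn(1, 64, 32, 32)
print(InvertedResidual(64, 64)(feats).shape)  # torch.Size([1, 64, 32, 32])
```

Stacking such blocks inside an FCNN keeps the parameter count low (depthwise convolutions dominate the spatial filtering) while the shortcuts support the feature reuse the abstract highlights.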