Owing to intensity differences and speckle noise, automatic matching of optical and synthetic aperture radar (SAR) images remains a challenging task. This letter addresses the problem by proposing a novel descriptor, MaskMIND, with three different modes that exploit modality-independent neighborhood information. The descriptor samples and activates relative structural information to improve accuracy and precision. In addition, gradient maps are computed separately in a preprocessing step to suppress noise. A corresponding similarity metric, which accounts for positional uncertainty growing with distance, is then defined using the sum of squared differences (SSD) accelerated by the fast Fourier transform (FFT). The proposed methods are effective because they rely on relative, abstract structural cues rather than raw intensities. Experimental results on five optical-SAR image pairs show that our methods achieve strong performance and have further potential. Compared with CFOG, a state-of-the-art method, the accuracy of our sMaskMIND-grids variant is improved by 12% on average.
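The FFT acceleration of SSD matching mentioned above rests on a standard identity: expanding SSD as sum(T²) + sum(I_win²) − 2·corr(I, T) turns the expensive cross term into a correlation computable with FFTs. The sketch below is an illustrative NumPy implementation of this general technique, not the letter's actual metric (which additionally weights by positional uncertainty); the function name and layout are our own.

```python
import numpy as np

def ssd_map_fft(image, template):
    """Illustrative sketch: SSD between a template and every same-size
    window of an image, via SSD = sum(T^2) + sum(I_win^2) - 2*corr(I, T).
    Both the cross-correlation and the windowed sum of squares are
    evaluated with FFTs. Not the paper's exact metric (no positional
    uncertainty weighting)."""
    H, W = image.shape
    h, w = template.shape
    oh, ow = H - h + 1, W - w + 1          # valid matching positions

    # FFT size large enough to hold the full linear convolution
    fH, fW = H + h - 1, W + w - 1
    F_img = np.fft.rfft2(image, s=(fH, fW))
    # Correlation = convolution with the spatially flipped template
    F_tpl = np.fft.rfft2(template[::-1, ::-1], s=(fH, fW))
    cross = np.fft.irfft2(F_img * F_tpl, s=(fH, fW))[h-1:h-1+oh, w-1:w-1+ow]

    # Windowed sum of squares: convolve image^2 with an all-ones kernel
    F_sq = np.fft.rfft2(image ** 2, s=(fH, fW))
    F_ones = np.fft.rfft2(np.ones((h, w)), s=(fH, fW))
    win_sq = np.fft.irfft2(F_sq * F_ones, s=(fH, fW))[h-1:h-1+oh, w-1:w-1+ow]

    return win_sq + np.sum(template ** 2) - 2.0 * cross
```

For an H×W image and h×w template, this costs O(HW log HW) instead of the O(HW·hw) of direct search, which is what makes dense descriptor matching at this scale practical.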