Abstract

Image matching is a key prerequisite for image fusion. Deep learning methods have shown great potential in matching, but they mainly focus on optical images or area-based matching, and their performance depends heavily on the diversity of the training set. This paper makes two major contributions to the synthetic aperture radar (SAR)-optical image feature matching task. First, we create a high-quality open SAR-optical patch dataset called SOPatch, which contains more than 650,000 matching patch pairs. SOPatch is generated from a variety of satellite images (Sentinel-1, Sentinel-2, Gaofen-3, etc.) and covers a rich set of scenes (mountains, lakes, buildings, farmland, bare land, etc.), which provides robustness to sensor and localization changes and supports training models with good generalization ability. Similar to HPatches (Balntas et al., 2017), we also add position and rotation noise to make the dataset more realistic and challenging. Second, we propose a local descriptor for SAR-optical matching called SODescNet. We first suppress SAR speckle noise with multiscale dilated convolutions and a channel attention mechanism, and then use DenseNet-CSP (cross stage partial) as the backbone for feature description. To ease network training, we first pre-train with a Siamese network and then fine-tune it via a pseudo-Siamese network. Extensive experiments show that SODescNet is comparable or superior to state-of-the-art methods.
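To make the speckle-suppression front end concrete, below is a minimal PyTorch sketch of the kind of block the abstract describes: parallel multiscale dilated convolutions fused and then reweighted by a channel attention gate. The layer widths, dilation rates, and the squeeze-and-excitation-style attention with its reduction ratio are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel reweighting (assumed form)."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),  # global average pool: B x C x 1 x 1
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),  # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.gate(x)


class MultiscaleDilatedBlock(nn.Module):
    """Parallel 3x3 convolutions with increasing dilation, fused and gated."""

    def __init__(self, in_ch: int = 1, out_ch: int = 32, dilations=(1, 2, 4)):
        super().__init__()
        # One branch per dilation rate; padding=dilation keeps spatial size.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        # 1x1 conv fuses the concatenated multiscale responses.
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)
        self.attn = ChannelAttention(out_ch)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return self.attn(self.fuse(feats))


# Usage: features from a batch of single-channel SAR patches.
if __name__ == "__main__":
    sar_patches = torch.randn(8, 1, 64, 64)  # B x 1 x H x W
    block = MultiscaleDilatedBlock(in_ch=1, out_ch=32)
    print(block(sar_patches).shape)  # torch.Size([8, 32, 64, 64])
```

The intent of such a block is that branches with larger dilation aggregate wider spatial context, which helps average out multiplicative speckle, while the channel gate down-weights noise-dominated feature maps before the DenseNet-CSP backbone sees them.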
