Abstract

Image matching between optical and synthetic aperture radar (SAR) images is one of the most fundamental problems in earth observation. In recent years, many researchers have applied handcrafted descriptors, built with domain expertise, to find matches between optical and SAR images. However, the large nonlinear radiometric difference between optical and SAR images makes matching very difficult. To address these problems, this article proposes an efficient feature matching and position matching algorithm (MatchosNet) based on local deep feature descriptors. First, a new dataset is presented by collecting a large number of corresponding SAR and optical images. Then, a deep convolutional network with dense blocks and cross stage partial networks is designed to generate deep feature descriptors. Next, the hard L2 loss function and the ARCpatch loss function are designed to improve the matching effect. In addition, on the basis of feature matching, a two-dimensional (2-D) Gaussian function voting algorithm is designed to further match the position of optical and SAR images of different sizes. Finally, extensive quantitative experiments show that MatchosNet achieves excellent performance in both feature matching and position matching. The code will be released at: https://github.com/LiaoYun0x0/Feature-Matching-and-Position-Matching-between-Optical-and-SAR.

Highlights

  • In earth observation, optical and synthetic aperture radar (SAR) images can be compared and analyzed jointly, complementing each other to yield more valuable information

  • A new deep learning method, MatchosNet, is designed to perform feature matching between optical and SAR images of different sizes, and further to match the position of SAR images on optical images

  • A new dataset is built by collecting a large number of corresponding SAR and optical images

Summary

INTRODUCTION

In earth observation, optical and synthetic aperture radar (SAR) images can be compared and analyzed jointly to obtain more valuable information through their complementarity. Many handcrafted feature descriptor matching methods [15]–[20] have emerged, but due to the nonlinear radiometric difference, it is very difficult to extract a sufficient number of highly repeatable features from optical and SAR images [21], [22]. We focus on solving the feature matching problem for optical and SAR images of different sizes and, on this basis, further implement position matching: a two-dimensional (2-D) Gaussian function voting algorithm is designed to locate a SAR image within an optical image of a different size. Extensive experiments demonstrate that MatchosNet performs excellently on both feature matching and position matching between optical and SAR images.
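To make the two stages concrete, the following is a minimal sketch, not the paper's actual implementation: descriptors are matched by mutual nearest neighbor under L2 distance, and each matched keypoint pair then casts a 2-D Gaussian vote for the candidate offset of the SAR patch inside the larger optical image, with the accumulator peak taken as the matched position. All function names, the grid size, and the `sigma` bandwidth are illustrative assumptions.

```python
import numpy as np

def mutual_nn_matches(desc_opt, desc_sar):
    """Match two descriptor sets (N, D) and (M, D) by mutual
    nearest neighbor under L2 distance; returns (K, 2) index pairs."""
    d = np.linalg.norm(desc_opt[:, None, :] - desc_sar[None, :, :], axis=2)
    nn12 = d.argmin(axis=1)            # best SAR match per optical descriptor
    nn21 = d.argmin(axis=0)            # best optical match per SAR descriptor
    keep = nn21[nn12] == np.arange(len(desc_opt))  # keep only mutual pairs
    return np.stack([np.flatnonzero(keep), nn12[keep]], axis=1)

def gaussian_vote_position(offsets, grid_shape, sigma=2.0):
    """Each matched keypoint pair implies a candidate (row, col) offset of
    the SAR patch in the optical image; each candidate casts a 2-D Gaussian
    vote onto an accumulator, and the peak is the estimated position."""
    acc = np.zeros(grid_shape)
    ys, xs = np.mgrid[0:grid_shape[0], 0:grid_shape[1]]
    for oy, ox in offsets:
        acc += np.exp(-((ys - oy) ** 2 + (xs - ox) ** 2) / (2 * sigma ** 2))
    return np.unravel_index(acc.argmax(), acc.shape)

# Consistent offsets around (10, 20) outvote a single outlier at (50, 5).
print(gaussian_vote_position([(10, 20), (10, 20), (11, 20), (50, 5)], (64, 64)))
```

The Gaussian smoothing makes the vote robust to small keypoint localization errors: nearby candidate offsets reinforce one another, while isolated outlier matches contribute only a single weak peak.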

RELATED WORK
The procedure of the proposed method
The Architecture of the proposed framework
Model Training and Loss
Position matching algorithm
Baseline
Comparative feature matching tests of different methods
Position matching Tests of MatchosNet
Methods
Evaluation of Computational Complexity and Time Performance
Ablation Studies and Analysis
CONCLUSIONS