Abstract

Owing to factors such as severe speckle noise and orbit-direction differences, achieving accurate and robust multitemporal synthetic aperture radar (SAR) image registration is difficult. Herein, an efficient self-supervised deep learning registration network for multitemporal SAR image registration, the SAR-superpoint and transformation aggregation network (SSTA-Net), is proposed. The SSTA-Net consists of three parts: 1) the SAR-superpoint detection network (SS-Net); 2) the transformation aggregation feature matching network (TA-Net); and 3) the unstable point removal module. Specifically, a pseudolabel generation method is adopted that requires no additional annotations: it transfers the characteristics of real SAR data to synthetic data through a feature transition module, which generates feature point labels for real SAR images for self-training the SS-Net. Furthermore, a position–channel aggregation attention is proposed and embedded into the SS-Net to efficiently capture position and channel information and to increase the stability and accuracy of feature point identification. Finally, a transformation aggregation strategy is designed to improve the robustness of feature matching, and an unstable point removal module is adopted to eliminate mismatched point pairs caused by orbit differences. Six sets of multitemporal SAR images were used to evaluate the registration performance of the SSTA-Net, and the model was also compared with traditional and deep-learning algorithms. The experimental results demonstrate that the SSTA-Net outperforms various state-of-the-art approaches for SAR image registration.
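The abstract does not specify the internals of the position–channel aggregation attention, so the following is only an illustrative sketch of the general idea (channel gating plus coordinate-style positional gating, in the spirit of squeeze-and-excitation and coordinate attention). The function name, the weight matrices `w_c`, `w_h`, `w_w`, and the pooling choices are all assumptions, not the paper's actual module.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def position_channel_aggregation_attention(feat, w_c, w_h, w_w):
    """Hypothetical sketch of a position-channel attention block.

    feat : (C, H, W) feature map.
    Channel branch: global average pool over (H, W) -> linear -> sigmoid gate.
    Position branch: average pool along H and along W separately
    (coordinate-attention style) -> linear -> sigmoid gates.
    The three gates are broadcast back over the feature map and aggregated
    multiplicatively to reweight it.
    """
    C, H, W = feat.shape
    chan = sigmoid(w_c @ feat.mean(axis=(1, 2)))   # (C,) channel gate
    gh = sigmoid(w_h @ feat.mean(axis=(0, 2)))     # (H,) height-position gate
    gw = sigmoid(w_w @ feat.mean(axis=(0, 1)))     # (W,) width-position gate
    # Aggregate: broadcast the gates over channels, rows, and columns
    return feat * chan[:, None, None] * gh[None, :, None] * gw[None, None, :]

# Toy usage with random weights (shapes only; weights would be learned)
rng = np.random.default_rng(0)
C, H, W = 8, 16, 16
f = rng.standard_normal((C, H, W))
out = position_channel_aggregation_attention(
    f,
    rng.standard_normal((C, C)),
    rng.standard_normal((H, H)),
    rng.standard_normal((W, W)),
)
print(out.shape)
```

In this sketch the output keeps the input shape, so the block can be dropped into a detection backbone such as the SS-Net without changing downstream layer sizes; the actual aggregation used in the paper may differ.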
