Abstract

Registration of multi-sensor or multi-modal image pairs with large distortions is a fundamental task in many remote sensing applications. To achieve accurate and low-cost remote sensing image registration, we propose a multiscale unsupervised network (MU-Net). Without costly ground-truth labels, MU-Net directly learns the end-to-end mapping from an image pair to its transformation parameters. MU-Net performs a coarse-to-fine registration pipeline by stacking several deep neural network models on multiple scales, which prevents backpropagation from falling into a local extremum and resists significant image distortions. In addition, a novel loss function paradigm is designed based on structural similarity, which makes MU-Net suitable for various types of multi-modal images. MU-Net is compared with traditional feature-based and area-based methods, as well as with supervised and other unsupervised learning methods, on Optical-Optical, Optical-Infrared, Optical-SAR, and Optical-Map datasets. Experimental results show that MU-Net achieves more robust and accurate registration between image pairs with geometric and radiometric distortions. We share the datasets and the PyTorch implementation at https://github.com/yeyuanxin110/MU-Net.
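
To make the coarse-to-fine idea concrete, below is a minimal PyTorch sketch of an unsupervised, multiscale registration loop: a small regressor predicts affine parameters at each scale, each scale refines the warp estimated at the previous one, and training is driven by a structural-similarity loss rather than ground-truth parameters. All names (ParamRegressor, the scale list, the simplified SSIM term) are illustrative assumptions, not the authors' actual MU-Net implementation; see the linked repository for the real code.

```python
# Hedged sketch of a coarse-to-fine, multiscale registration pipeline in PyTorch.
# Assumptions: 2D affine motion model, single-channel images, a toy global SSIM.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParamRegressor(nn.Module):
    """Tiny CNN that regresses 6 affine parameters from a stacked image pair."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, 6)
        # Initialize to the identity transform so training starts stably.
        nn.init.zeros_(self.fc.weight)
        self.fc.bias.data = torch.tensor([1., 0., 0., 0., 1., 0.])

    def forward(self, fixed, moving):
        x = self.features(torch.cat([fixed, moving], dim=1)).flatten(1)
        return self.fc(x).view(-1, 2, 3)  # one 2x3 affine matrix per sample

def warp(img, theta):
    """Differentiably resample img under the affine transform theta."""
    grid = F.affine_grid(theta, img.shape, align_corners=False)
    return F.grid_sample(img, grid, align_corners=False)

def ssim_loss(a, b, c=1e-4):
    """Crude global SSIM-style dissimilarity (a stand-in for the paper's
    structural-similarity loss, whose exact form is given in the paper)."""
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    ssim = ((2 * mu_a * mu_b + c) * (2 * cov + c)) / \
           ((mu_a**2 + mu_b**2 + c) * (var_a + var_b + c))
    return 1 - ssim

# One regressor per scale, applied coarsest first.
scales = [0.25, 0.5, 1.0]
nets = [ParamRegressor() for _ in scales]

def register(fixed, moving):
    """Coarse-to-fine: each scale refines the warp from the previous one."""
    warped = moving
    for s, net in zip(scales, nets):
        f = F.interpolate(fixed, scale_factor=s, mode='bilinear', align_corners=False)
        m = F.interpolate(warped, scale_factor=s, mode='bilinear', align_corners=False)
        theta = net(f, m)              # estimate residual transform at this scale
        warped = warp(warped, theta)   # apply refinement at full resolution
    return warped

fixed = torch.rand(1, 1, 128, 128)
moving = torch.rand(1, 1, 128, 128)
loss = ssim_loss(fixed, register(fixed, moving))
loss.backward()  # unsupervised: no ground-truth parameters needed
```

Starting each regressor at the identity transform and refining across scales is what keeps the optimization from drifting to a poor local extremum under large initial misalignment, which is the motivation the abstract gives for the stacked multiscale design.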
