Abstract
Because synthetic aperture radar (SAR) and optical imagery provide highly complementary information for remote sensing, matching the two modalities has drawn much attention in recent years. Compared with traditional methods, deep-learning-based SAR-optical image matching models rely heavily on ground-truth supervision, and their matching accuracy degrades on unseen image domains. To reduce the burden of labeling, transferring deep learning models trained on annotated source domains to unannotated target domains has attracted great interest. However, the gap between the source and target domains is likely to deteriorate matching accuracy on target data if training is conducted directly without proper domain adaptation (DA). In this research, a Siamese DA (SDA) approach with a combined loss function is developed in the context of multimodality image matching. Then, a novel rotation/scale-invariant transformation module with regression modules is designed to extract rotation/scale-equivariant features. Finally, a causal-inference-based self-learning method and a multiresolution histogram matching approach are employed to enhance unsupervised matching performance. Experimental results on the RadarSat/Planet dataset and the Sentinel-1/2 dataset demonstrate that the developed model achieves competitive matching performance with a low overlap ratio between domains and little data labeling. By alleviating the domain discrepancy, the developed model drastically reduces the average L2 score of unsupervised matching from 9.576 to 0.658, while the fraction of matches with less-than-one-pixel error rises from 66.3% to 90.6%.
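To illustrate the Siamese DA idea, the sketch below pairs a shared-weight encoder with a DANN-style gradient-reversal domain classifier and combines a contrastive matching loss with a domain-confusion term. This is a minimal, assumption-laden sketch: the class names (`SiameseDA`, `GradReverse`), the architecture, and the loss weighting are illustrative only; the abstract does not specify the paper's exact combined loss, and the adversarial term here is one common choice, not necessarily the authors'.

```python
# Hypothetical sketch of Siamese domain adaptation with a combined loss,
# assuming a DANN-style gradient-reversal adversary. Names and architecture
# are illustrative, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in the backward
    pass so the encoder is trained to confuse the domain classifier."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


class SiameseDA(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        # Shared encoder applied to both SAR and optical patches (Siamese weights).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Domain classifier (source vs. target), trained through gradient reversal.
        self.domain_head = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 2),
        )

    def forward(self, sar, opt, lam=1.0):
        f_sar = F.normalize(self.encoder(sar), dim=1)
        f_opt = F.normalize(self.encoder(opt), dim=1)
        feats = torch.cat([f_sar, f_opt])          # (2B, feat_dim)
        dom_logits = self.domain_head(GradReverse.apply(feats, lam))
        return f_sar, f_opt, dom_logits


def combined_loss(f_sar, f_opt, dom_logits, dom_labels, match_labels,
                  margin=1.0, alpha=0.1):
    """Contrastive matching loss plus an adversarial domain-confusion term."""
    d = (f_sar - f_opt).pow(2).sum(1)              # squared L2 distance per pair
    match = match_labels * d + (1 - match_labels) * F.relu(margin - d.sqrt()).pow(2)
    domain = F.cross_entropy(dom_logits, dom_labels)
    return match.mean() + alpha * domain
```

Note that published SAR-optical matching networks are often pseudo-Siamese (separate branches per modality); the fully shared single-channel encoder above is a simplification for brevity.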
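The multiresolution histogram matching step can likewise be sketched. The version below assumes a Gaussian-pyramid, coarse-to-fine quantile-matching scheme and averages the upsampled per-level results; the abstract does not describe the paper's actual construction, so both the blending rule and the function names are hypothetical.

```python
# Hypothetical sketch of multiresolution histogram matching: match the
# source image's intensity distribution to a reference at several pyramid
# scales, then average the upsampled results.
import numpy as np
from skimage.transform import pyramid_gaussian, resize


def match_histogram(src, ref):
    """Classic histogram matching: map src's intensity quantiles onto ref's."""
    src_vals, src_idx, src_counts = np.unique(
        src.ravel(), return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(ref.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / src.size
    ref_cdf = np.cumsum(ref_counts) / ref.size
    mapped = np.interp(src_cdf, ref_cdf, ref_vals)  # quantile-to-quantile map
    return mapped[src_idx].reshape(src.shape)


def multires_histogram_match(src, ref, levels=3):
    """Match histograms at each Gaussian-pyramid level and blend the results."""
    src_pyr = list(pyramid_gaussian(src, max_layer=levels - 1))
    ref_pyr = list(pyramid_gaussian(ref, max_layer=levels - 1))
    out = np.zeros(src.shape, dtype=np.float64)
    for s, r in zip(src_pyr, ref_pyr):
        out += resize(match_histogram(s, r), src.shape)
    return out / levels
```

Matching at multiple scales tempers the sensitivity of single-resolution histogram matching to speckle and local radiometric outliers, which is plausibly why a multiresolution variant helps in the SAR-optical setting.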