Abstract

Synthetic aperture radar (SAR) images have been used extensively in earthquake monitoring, resource surveying, agricultural forecasting, etc. However, SAR images are challenging to interpret because of the severe speckle noise and geometric deformation inherent in radar imaging. SAR-to-optical image translation provides new support for the interpretation of SAR images. Most existing translation networks, which are based on generative adversarial networks (GANs), suffer partial information loss during the feature reasoning stage, which blurs the outlines of the translated images and loses semantic information. To address these problems, a cross-fusion reasoning and wavelet decomposition GAN (CFRWD-GAN) is proposed to preserve structural details and enhance high-frequency band information. Specifically, the cross-fusion reasoning (CFR) structure is proposed to preserve high-resolution detailed features and low-resolution semantic features throughout the feature reasoning process. Moreover, the discrete wavelet decomposition (WD) method is adopted to handle the speckle noise in SAR images and achieve the translation of high-frequency components. Finally, the WD branch is integrated with the CFR branch through an adaptive parameter learning method to translate SAR images to optical ones. Extensive experiments conducted on two publicly available datasets, QXS-SAROPT and SEN1-2, demonstrate a better translation performance of the proposed CFRWD-GAN compared to five other state-of-the-art models.
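To illustrate the discrete wavelet decomposition step the abstract refers to, the sketch below performs a one-level 2-D Haar decomposition with NumPy, splitting an image into a low-frequency approximation band and three high-frequency detail bands. This is a minimal illustration of the general technique, not the paper's implementation; the function name `haar_dwt2` and the choice of the Haar basis are assumptions for the example.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar wavelet decomposition (illustrative sketch).

    Splits an image into a low-frequency approximation band (LL) and
    three high-frequency detail bands (LH, HL, HH). Speckle noise in
    SAR images is largely concentrated in the high-frequency bands,
    which is why a WD branch can treat them separately.
    """
    # Gather the four pixels of every 2x2 block.
    a = img[0::2, 0::2].astype(float)  # top-left
    b = img[0::2, 1::2].astype(float)  # top-right
    c = img[1::2, 0::2].astype(float)  # bottom-left
    d = img[1::2, 1::2].astype(float)  # bottom-right
    ll = (a + b + c + d) / 2.0  # approximation (low frequency)
    lh = (a + b - c - d) / 2.0  # horizontal detail
    hl = (a - b + c - d) / 2.0  # vertical detail
    hh = (a - b - c + d) / 2.0  # diagonal detail
    return ll, lh, hl, hh

# A constant image carries no high-frequency energy: all detail bands are zero.
flat = np.ones((4, 4))
ll, lh, hl, hh = haar_dwt2(flat)
```

Each band has half the spatial resolution of the input, so a translation network can process the noisy detail bands and the clean approximation band with separate pathways before recombining them.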
