Abstract

Unlike optical sensors, Synthetic Aperture Radar (SAR) sensors acquire images of the Earth's surface with all-weather, day-and-night capability, which is vital in situations such as disaster assessment. However, SAR sensors do not offer visual information as rich as that of optical sensors. SAR-to-Optical image-to-image translation generates optical images from SAR images so that the strengths of both imaging modalities can be exploited. It also enables multi-sensor analysis of the same scene for applications such as heterogeneous change detection. Various architectures of Generative Adversarial Networks (GANs) have achieved remarkable image-to-image translation results in other domains, yet their performance on SAR-to-Optical translation has not been analyzed in the remote sensing domain. This paper compares and analyses state-of-the-art GAN-based translation methods with open-source implementations for SAR-to-Optical image translation. The results show that GAN-based SAR-to-Optical translation methods achieve satisfactory results, but that their performance depends on the structural complexity of the observed scene and the spatial resolution of the data. We also introduce a new dataset with a higher spatial resolution than existing SAR-to-Optical image datasets and release the implementations of the GAN-based methods considered in this paper to support reproducible research in remote sensing.
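To make the GAN-based translation setting concrete, the sketch below shows a minimal conditional-GAN (pix2pix-style) training step that maps a SAR patch to a co-registered optical patch, combining an adversarial loss with an L1 reconstruction loss. The toy networks, patch size, learning rates, and L1 weight are illustrative assumptions for this sketch only; they are not the specific architectures or settings evaluated in the paper.

```python
# Minimal pix2pix-style conditional GAN training step (illustrative sketch).
import torch
import torch.nn as nn

# Toy generator: maps a 1-channel SAR patch to a 3-channel optical patch.
G = nn.Sequential(
    nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
)

# Toy discriminator: scores concatenated (SAR, optical) pairs, PatchGAN-like.
D = nn.Sequential(
    nn.Conv2d(1 + 3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4, stride=2, padding=1),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

sar = torch.randn(4, 1, 64, 64)      # stand-in SAR batch
optical = torch.randn(4, 3, 64, 64)  # stand-in co-registered optical batch

# Discriminator step: distinguish real (SAR, optical) pairs from generated ones.
fake = G(sar)
d_real = D(torch.cat([sar, optical], dim=1))
d_fake = D(torch.cat([sar, fake.detach()], dim=1))
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool the discriminator plus L1 reconstruction toward the optical target.
d_fake = D(torch.cat([sar, fake], dim=1))
loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, optical)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

In practice the compared methods replace these toy networks with their own generator and discriminator designs (e.g. encoder-decoder generators and patch-based discriminators) and train on paired or unpaired SAR and optical imagery.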
