Abstract

Synthetic aperture radar (SAR) offers all-weather, all-day observation and a limited ability to penetrate the surface, giving it unique advantages that other remote sensing methods cannot match. However, owing to its imaging principles, SAR images are difficult to interpret; translating SAR images into optical remote sensing images is therefore one approach to SAR image interpretation. This paper proposes an improved conditional generative adversarial network (cGAN) whose generator is based on an encoder-decoder structure with a Swin Transformer feature extraction module. The generator uses a spatial pyramid structure and a multi-scale deep feature extraction structure to extract SAR image features at different scales, and a multi-scale discriminator discriminates images at different scales. A feature matching loss makes the discriminator's feedback to the generator richer, and a perceptual loss based on VGGNet evaluates image quality in depth. A series of experiments verified the effectiveness of the proposed method and demonstrated the potential of the transformer structure for SAR image translation. The method presented in this paper has clear advantages and can translate SAR images into optical remote sensing images.
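The feature matching loss mentioned above is typically computed as the distance between intermediate discriminator activations on real and generated images, averaged over layers (and, for a multi-scale discriminator, over scales). A minimal NumPy sketch, assuming paired lists of feature maps from one discriminator scale (the function name and L1 formulation are illustrative, not taken from the paper):

```python
import numpy as np

def feature_matching_loss(real_feats, fake_feats):
    """Mean L1 distance between paired intermediate feature maps
    from the discriminator, averaged over layers.

    real_feats, fake_feats: lists of same-shaped ndarrays, one per
    discriminator layer (hypothetical sketch of this style of loss).
    """
    assert len(real_feats) == len(fake_feats)
    total = 0.0
    for r, f in zip(real_feats, fake_feats):
        total += np.mean(np.abs(r - f))  # L1 over the feature map
    return total / len(real_feats)      # average over layers
```

A multi-scale discriminator would apply this per scale and sum the results; a VGG-based perceptual loss has the same shape, with the feature lists drawn from a pretrained VGG network instead of the discriminator.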
