Abstract
Convolutional neural networks have recently shown superior performance in single-image superresolution. Although existing mean-square-error-based methods achieve a high peak signal-to-noise ratio (PSNR), they tend to generate oversmoothed results. Generative adversarial network (GAN)-based methods can provide high-resolution (HR) images with higher perceptual quality, but they introduce pseudotextures, which generally lowers PSNR. Moreover, different regions in remote sensing images (RSIs) reflect discrepant surface topography and visual characteristics, so a uniform reconstruction strategy may not suit all targets in RSIs. To address these problems, we propose a novel saliency-discriminated GAN for RSI superresolution. First, hierarchical weakly supervised saliency analysis is introduced to compute a saliency map, which is then used to distinguish the diverse demands of different regions in the subsequent generator and discriminator. Unlike previous GANs, the proposed residual dense saliency generator takes the saliency map as a supplementary condition. Simultaneously, exploiting the characteristics of RSIs, we design a new paired discriminator that enhances perceptual quality by measuring the distance between generated images and HR images in salient areas and nonsalient areas, respectively. Comprehensive evaluations validate the superiority of the proposed model.
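The paired-discriminator idea of measuring salient and nonsalient regions separately can be illustrated with a minimal sketch. The abstract does not specify the distance measure, so this example uses a simple pixel-wise L1 distance as a stand-in for the learned adversarial distance; the function name, the threshold parameter, and the L1 choice are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def paired_region_distances(sr, hr, saliency, threshold=0.5):
    """Split a pixel-wise distance between a super-resolved image (sr)
    and its high-resolution reference (hr) into salient and nonsalient
    parts, guided by a saliency map in [0, 1].

    Illustrative sketch only: the paper's paired discriminator learns
    these distances adversarially; here a plain L1 distance stands in."""
    # Binarize the saliency map into a salient-region mask (assumed threshold).
    mask = (saliency >= threshold).astype(sr.dtype)
    diff = np.abs(sr - hr)
    # Mean distance over each region; guard against empty regions.
    salient = (diff * mask).sum() / max(mask.sum(), 1)
    nonsalient = (diff * (1 - mask)).sum() / max((1 - mask).sum(), 1)
    return salient, nonsalient

# Toy example: top row is marked salient, bottom row nonsalient.
sr = np.array([[1.0, 2.0], [3.0, 4.0]])
hr = np.zeros((2, 2))
sal = np.array([[1.0, 1.0], [0.0, 0.0]])
s, n = paired_region_distances(sr, hr, sal)  # s = 1.5, n = 3.5
```

Keeping the two region distances separate allows the training objective to weight them differently, matching the observation that a uniform reconstruction strategy may not suit all targets in RSIs.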