Abstract

In remote sensing images (RSIs), the visual characteristics of different regions vary widely, which poses a considerable challenge to single image super-resolution (SISR). Most existing SISR methods for RSIs ignore the diverse reconstruction needs of different regions and thus face a serious contradiction between high perceptual quality and low spatial distortion. Mean square error (MSE) optimization-based methods produce results of unsatisfactory visual quality, while generative adversarial networks (GANs) can produce photo-realistic but severely distorted results caused by pseudotextures. In addition, increasingly deep networks, although providing powerful feature representations, also suffer from overfitting and occupy excessive storage space. In this article, we propose a new saliency-guided feedback GAN (SG-FBGAN) to address these problems. The proposed SG-FBGAN applies different reconstruction principles to areas with varying levels of saliency and uses feedback (FB) connections to improve the expressivity of the network while reducing the number of parameters. First, we propose a saliency-guided FB generator built on our carefully designed paired-feedback block (PFBB). The PFBB uses two branches, a salient branch and a nonsalient branch, to handle the FB information and generate powerful high-level representations for salient and nonsalient areas, respectively. Then, we measure the visual perception quality of salient areas, nonsalient areas, and the global image with a saliency-guided multidiscriminator, which can dramatically eliminate pseudotextures. Finally, we introduce a curriculum learning strategy that enables the proposed SG-FBGAN to handle complex degradation models. Comprehensive evaluations and ablation studies validate the effectiveness of our proposal.
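To make the two-branch paired-feedback idea concrete, the following is a minimal PyTorch sketch of a block that refines features separately for salient and nonsalient areas and blends them with a saliency map. All module names, channel sizes, and the use of concatenation to merge feedback features are assumptions for illustration only; the paper's actual PFBB design may differ in its wiring and components.

```python
# Minimal sketch (not the paper's implementation) of a paired-feedback block:
# two branches handle feedback features for salient and nonsalient areas,
# and their outputs are blended by a saliency mask in [0, 1].
import torch
import torch.nn as nn


class PairedFeedbackBlockSketch(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        # Salient branch: fuses current features with salient-area feedback.
        self.salient_branch = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.PReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Nonsalient branch: fuses current features with nonsalient feedback.
        self.nonsalient_branch = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.PReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, feat, fb_salient, fb_nonsalient, saliency_mask):
        # Each branch sees the shallow features plus its own feedback features.
        out_s = self.salient_branch(torch.cat([feat, fb_salient], dim=1))
        out_n = self.nonsalient_branch(torch.cat([feat, fb_nonsalient], dim=1))
        # Blend the two branch outputs according to the saliency map.
        fused = saliency_mask * out_s + (1 - saliency_mask) * out_n
        # Return fused features plus per-branch outputs to feed back next step.
        return fused, out_s, out_n
```

In a feedback generator of this kind, `out_s` and `out_n` from one iteration would be passed back in as `fb_salient` and `fb_nonsalient` at the next iteration, so the same block parameters are reused across steps.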
