Abstract

Generative adversarial networks (GANs) have been widely applied to single image super-resolution (SISR), and GAN-based methods have achieved significant performance on natural images owing to their ability to generate realistic textures. However, previous GAN-based methods perform poorly when applied to remote sensing scene image SR. In order to further enhance the visual quality, in this article, we present a GAN-based SISR method built on a novel generator, which is capable of generating perceptually pleasing remote sensing scene images. First, we design the enhanced deep back-projection network (E-DBPN) generator based on the architecture of the original DBPN, making two main modifications. The first is to add the proposed enhanced residual channel attention module (ERCAM) into the original DBPN, which preserves the good properties of the original input features while emphasizing more important features and suppressing less useful ones. The second is to replace the concatenation operation with the proposed sequential feature fusion module (SFFM), so that the feature maps generated by different up-projection units are treated discriminatively. As for the training process, the E-DBPN generator is first trained using the mean squared error (MSE) loss. Next, in order to improve the perceptual quality of the recovered images, we employ the content loss and the adversarial loss to train the initialized generator network. Experiments show that our method achieves state-of-the-art performance compared to other SISR methods.
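The abstract does not give the internal details of the ERCAM; the following is only a minimal sketch of a generic residual channel-attention block of the kind it describes, assuming a standard squeeze-and-excitation style gating. The class name, layer sizes, and reduction ratio are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical residual channel-attention block (not the paper's ERCAM):
# a residual path keeps the input features, while per-channel weights
# emphasize informative channels and suppress less useful ones.
import torch
import torch.nn as nn


class ResidualChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Two convolutions extract local features before attention.
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.PReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Squeeze: global average pooling; excite: two 1x1 convs + sigmoid
        # produce one gating weight per channel.
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.body(x)
        feat = feat * self.attention(feat)  # re-weight channels
        return x + feat                     # residual connection preserves input features


# Usage: re-weight a batch of 64-channel feature maps.
if __name__ == "__main__":
    block = ResidualChannelAttention(64)
    out = block(torch.randn(2, 64, 32, 32))
    print(out.shape)  # torch.Size([2, 64, 32, 32])
```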
