Abstract

Single image super-resolution (SISR) has been widely studied in recent years as a crucial technique for remote sensing applications. In this paper, a dense residual generative adversarial network (DRGAN)-based SISR method is proposed to improve the resolution of remote sensing images. Different from previous super-resolution (SR) approaches based on generative adversarial networks (GANs), the novelty of our method mainly lies in the following factors. First, we made a breakthrough in terms of network architecture to improve performance. We designed a dense residual network as the generative network in the GAN, which can make full use of the hierarchical features from low-resolution (LR) images. We also introduced a contiguous memory mechanism into the network to take advantage of the dense residual blocks. Second, we modified the loss function and altered the model of the discriminative network according to the Wasserstein GAN with gradient penalty (WGAN-GP) for stable training. Extensive experiments were performed using the NWPU-RESISC45 dataset, and the results demonstrate that the proposed method outperforms state-of-the-art methods in terms of both objective metrics and subjective visual quality.

Highlights

  • High-resolution (HR) images, which contain abundant, detailed information, are crucial for various remote sensing applications, such as target detection, surveillance [1], satellite imaging [2] and others

  • According to the theory of generative adversarial networks (GANs), there is a discriminative network (DN) in addition to the generative network (GN), and together they form the adversarial pair: the GN produces the reconstructed image ISR, while the DN distinguishes between the ground-truth image IG and ISR

  • Here f(θGN(IL)) and f(IG) denote the feature maps of ISR and IG extracted by VGG; λ[‖∇z θDN(z)‖₂ − 1]² is the gradient penalty from the Wasserstein GAN with gradient penalty (WGAN-GP); λ is a coefficient set to 10 based on several comparative experiments; and ∇z denotes the gradient with respect to z, where z = β f(IG) + (1 − β) f(θGN(IL)), β ∼ Uniform[0, 1]
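The gradient-penalty term above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it uses a hypothetical toy linear discriminator whose gradient is known in closed form, whereas the actual method computes ∇z via automatic differentiation on VGG feature maps; the function and variable names are assumptions for illustration.

```python
import numpy as np

def wgan_gp_penalty(d_grad_fn, x_real, x_fake, lam=10.0, rng=None):
    """WGAN-GP penalty: lam * (||grad_z D(z)||_2 - 1)^2, evaluated at a
    random interpolate z between a real and a generated sample."""
    rng = rng or np.random.default_rng(0)
    beta = rng.uniform(0.0, 1.0)               # beta ~ Uniform[0, 1]
    z = beta * x_real + (1.0 - beta) * x_fake  # random interpolate
    grad = d_grad_fn(z)                        # gradient of D at z
    return lam * (np.linalg.norm(grad) - 1.0) ** 2

# Toy linear discriminator D(z) = w . z, whose gradient is w everywhere.
w = np.array([0.6, 0.8])  # ||w||_2 = 1, so the penalty should vanish
penalty = wgan_gp_penalty(lambda z: w,
                          np.array([1.0, 2.0]),   # stand-in "real" features
                          np.array([0.0, 0.0]))   # stand-in "fake" features
print(round(penalty, 6))  # 0.0: a unit-norm gradient incurs no penalty
```

The penalty drives the discriminator toward a unit gradient norm along the line between real and generated samples, which is what stabilizes WGAN training compared with weight clipping.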


Summary

Introduction

High-resolution (HR) images, which contain abundant, detailed information, are crucial for various remote sensing applications, such as target detection, surveillance [1], satellite imaging [2] and others. In [27], He et al. designed a novel deep–shallow cascade-based CNN method, which can effectively recover the high-frequency information of remote sensing images. Ledig et al. designed a GAN for image super-resolution (SRGAN) [34]: they employed a deep residual network with skip connections, proposed by He et al. [35], as the generative network (GN) and designed a classification network as the discriminative network (DN). Ma et al. [36] proposed a novel SR method named transferred generative adversarial network (TGAN), which can enhance the feature representation ability of the model and mitigate the poor quality and insufficient quantity of remote sensing training images. To address the above drawbacks, we propose a dense residual generative adversarial network (DRGAN) for the remote sensing image SR task.

Related Work
GAN-Based SR
Feature Extraction
Image Reconstruction
Structure of the DN
Loss Function
Dataset
Training Details
Quantitative Evaluation Factors
Results
Robustness of the Model