Abstract

The super-resolution generative adversarial network (SRGAN) is a seminal work capable of generating realistic textures during single image super-resolution. However, its hallucinated details are often accompanied by unpleasant artifacts. To further enhance visual quality, we propose a deep learning method for single image super-resolution (SR) that directly learns an end-to-end mapping between low-resolution and high-resolution images. The method is based on a depthwise separable convolution super-resolution generative adversarial network (DSCSRGAN). A new depthwise separable convolution dense block (DSC Dense Block) was designed for the generator network, which improves the ability to represent and extract image features while greatly reducing the total number of parameters. In the discriminator network, the batch normalization (BN) layers were removed, which reduces artifacts. A frequency energy similarity loss function was designed to constrain the generator network to produce better super-resolution images. Experiments on several datasets showed that, compared with the original model, the peak signal-to-noise ratio (PSNR) improved by more than 3 dB, the structural similarity index (SSIM) increased by 16%, and the total number of parameters was reduced to 42.8%. Judged by both objective metrics and subjective visual evaluation, the algorithm generates richer image detail and clearer texture at lower complexity.
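As a rough illustration (not code from the paper) of why depthwise separable convolutions shrink the parameter count so sharply, the PyTorch sketch below compares a standard 3×3 convolution with a depthwise-plus-pointwise pair; the 64-channel layer width is a hypothetical choice, not taken from DSCSRGAN.

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise convolution followed by a 1x1 pointwise convolution."""
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        # Each input channel is filtered independently (groups=in_ch)...
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=padding, groups=in_ch)
        # ...then a 1x1 convolution mixes information across channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

def count_params(m):
    return sum(p.numel() for p in m.parameters())

standard = nn.Conv2d(64, 64, 3, padding=1)   # 64*64*3*3 weights + 64 biases
separable = DepthwiseSeparableConv(64, 64)   # 64*3*3 + 64*64 weights + biases
print(count_params(standard), count_params(separable))  # 36928 vs 4800
```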

Highlights

  • Single image super-resolution (SISR) reconstruction is the recovery of high-resolution (HR) images with richer details and clearer texture from low-resolution (LR) or degraded images, and it has wide applications in the field of image processing [1,2,3,4,5]

  • The super-resolved image I_SR and the high-resolution image I_HR have their own characteristics as two-dimensional discrete signals, so this paper proposes a frequency energy similarity loss function

Summary

Introduction

Single image super-resolution (SISR) reconstruction recovers high-resolution (HR) images with richer details and clearer texture from low-resolution (LR) images. We propose a GAN based on depthwise separable convolution dense blocks, which improves computational efficiency by greatly reducing the number of parameters. To better restore image edges and improve the quality of super-resolution images, we also introduce a new loss function for the generator network: the frequency energy similarity loss function. In summary, we propose a depthwise separable convolution dense block for learning image feature representations, and a frequency energy similarity loss function that proved more stable and effective than previous approaches in our experiments.
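The exact definition of the frequency energy similarity loss is not given on this page, so the sketch below is only one plausible reading: it compares the spectral energy of the super-resolved and ground-truth images via a 2-D FFT. The function name frequency_energy_loss and the log-energy L1 penalty are illustrative assumptions, not the authors' formulation.

```python
import torch

def frequency_energy_loss(sr, hr, eps=1e-8):
    """Hypothetical sketch: penalize the difference in frequency-domain
    energy between super-resolved (sr) and high-resolution (hr) images,
    both shaped (N, C, H, W). Assumed formulation, not the paper's."""
    # 2-D FFT over the spatial dimensions.
    sr_spec = torch.fft.fft2(sr)
    hr_spec = torch.fft.fft2(hr)
    # Energy = squared magnitude of each frequency component.
    sr_energy = sr_spec.abs() ** 2
    hr_energy = hr_spec.abs() ** 2
    # L1 distance between log-scaled energy spectra as the penalty.
    return torch.mean(torch.abs(torch.log(sr_energy + eps) -
                                torch.log(hr_energy + eps)))
```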

Generative Adversarial Networks
Depthwise Separable Convolution
DSCSRGAN Architecture
Depthwise Separable Convolution Dense Block
Frequency Energy Similarity Loss Function
Data and Similarity Measures
Performance of the Final Networks
Findings
Discussion and Future
