Abstract

Recently, deep-learning-based image super-resolution methods have made remarkable progress. However, most of these methods do not fully exploit the structural features of the input image or the intermediate features from intermediate layers, which hinders detail recovery. To address this issue, we propose a gradient-guided and multi-scale feature network for image super-resolution (GFSR). Specifically, we propose a dual-branch network consisting of a trunk branch and a gradient branch, where the latter extracts a gradient feature map that serves as a structural prior to guide the image reconstruction process. Then, to absorb features from different layers, two effective multi-scale feature extraction modules, namely the residual of residual inception block (RRIB) and the residual of residual receptive field block (RRRFB), are proposed and embedded in different network layers. Within the RRIB and RRRFB structures, an adaptive weighted residual feature fusion block (RFFB) fuses the intermediate features to generate more beneficial representations, and an adaptive channel attention block (ACAB) effectively explores the dependencies between channel features to further boost the feature representation capacity. Experimental results on several benchmark datasets demonstrate that our method achieves superior performance compared with state-of-the-art methods in terms of both subjective visual quality and objective quantitative metrics.
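The gradient branch described above supplies a structural prior extracted from the input image. The abstract does not specify the gradient operator, so the sketch below assumes a standard Sobel filter, a common choice for gradient priors; the function name and the edge-padding choice are illustrative, not taken from the paper.

```python
import numpy as np

def gradient_map(img):
    """Extract a gradient-magnitude map as a structural prior (illustrative).

    Assumes a Sobel operator; the paper's exact gradient extraction is not
    specified in this excerpt. img: 2-D float array (H, W), e.g. the
    luminance channel of an LR image.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")          # "same" output size
    H, W = img.shape
    gx = np.zeros((H, W))
    gy = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()      # horizontal gradient
            gy[i, j] = (patch * ky).sum()      # vertical gradient
    return np.sqrt(gx ** 2 + gy ** 2)          # large at structure boundaries
```

The resulting map is large along edges and near zero in flat regions, which is why it can guide the reconstruction of fine details.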

Highlights

  • We propose a gradient-guided and multi-scale feature network for image super-resolution (GFSR), including a trunk branch and a gradient branch, and extensive experiments demonstrate that our GFSR outperforms state-of-the-art methods in terms of both visual quality and quantitative metrics.

  • Zhang et al. [15] introduced the channel attention mechanism into the residual network (ResNet) and assigned a different weight to each channel feature according to its contribution to super-resolution performance, which greatly enhanced the detailed information of the reconstructed image.
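The channel attention of [15] follows the squeeze-and-excitation pattern: global average pooling per channel, two fully connected layers with a ReLU in between, and a sigmoid that produces per-channel rescaling weights. A minimal NumPy sketch, assuming that standard formulation (weight names and shapes here are illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation style channel attention (illustrative sketch).

    feat: (C, H, W) feature maps; w1: (C//r, C) and w2: (C, C//r) are the
    channel-downscaling and channel-upscaling FC layers (r is the reduction
    ratio; the learned weights are placeholders here).
    """
    squeeze = feat.mean(axis=(1, 2))            # global average pooling -> (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)      # downscaling FC + ReLU
    weights = sigmoid(w2 @ hidden)              # upscaling FC + sigmoid -> (C,)
    return feat * weights[:, None, None]        # per-channel rescaling
```

Channels judged more informative receive weights closer to 1 and are preserved, while less useful channels are suppressed.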

  • We first present the overall framework of the network, and afterward describe the gradient branch, multi-scale convolution unit, residual feature fusion block, and adaptive channel attention block in detail.


Summary

Introduction

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Numerous specialists have proposed many effective hand-crafted models that improve super-resolution performance by stacking large numbers of modules to extract features from different levels, such as [6–13]. Although these methods can boost the feature representation capability, they also make the network increasingly complex, which aggravates the problem of exploding and vanishing gradients. To address these issues, we propose a gradient-guided and multi-scale feature network for image super-resolution (GFSR), which fuses the extracted multi-scale intermediate features and treats the gradient feature map as a structural prior to guide the super-resolution process so as to recover as many details as possible. It is worth noting that the proposed multi-scale convolution module can simultaneously extract coarse and fine features from the input LR image without increasing the network depth, facilitating more effective learning of the complex mapping between LR and HR counterparts [17–19].
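The multi-scale idea can be illustrated as an inception-style unit: parallel branches with different kernel sizes applied to the same input, concatenated along the channel axis. The kernels below are uniform placeholders for learned filters; the paper's RRIB/RRRFB modules are more elaborate (residual fusion and attention), so this is only a minimal sketch of the multi-scale principle.

```python
import numpy as np

def conv2d(img, k):
    """'Same'-size 2-D convolution with edge padding (helper for the sketch)."""
    r = k.shape[0] // 2
    pad = np.pad(img, r, mode="edge")
    H, W = img.shape
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = (pad[i:i + k.shape[0], j:j + k.shape[1]] * k).sum()
    return out

def multi_scale_unit(img):
    """Inception-style multi-scale extraction (illustrative): parallel 1x1,
    3x3, and 5x5 branches on the same input, stacked along the channel axis.
    Larger kernels capture coarse context; smaller ones keep fine detail."""
    branches = [conv2d(img, np.ones((s, s)) / (s * s)) for s in (1, 3, 5)]
    return np.stack(branches)                   # (3, H, W)
```

Because the branches run in parallel on the same input, the unit widens the receptive-field coverage without deepening the network, which is the point made in the paragraph above.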

Related Works
CNN-Based SR Models
Residual Network
Attention Mechanism
Gradient Feature
Proposed Network
Network Structure
Gradient Branch
Multi-Scale Convolution Unit
Residual Feature Fusion Block
Adaptive Channel Attention Block
Experiments and Analysis
Datasets and Metrics
Experimental Details
Ablation Experiment
Verification of the Effectiveness of Multi-Scale Feature Extraction Unit
Verification of the Effectiveness of Structure Prior
Verification of the Effectiveness of Adaptive Weight Residual Unit
Verification of the Effectiveness of the Remaining Modules
Selection of Related Hyperparameters
Comparison with State-of-the-Art Methods
Method
Analysis of the Number of Parameters of the Model
Findings
Conclusions