Abstract

Recently, convolutional neural networks (CNNs) have achieved impressive breakthroughs in single image super-resolution. In particular, by increasing the depth and width of the network, an efficient nonlinear mapping can be learned between the low-resolution input image and the high-resolution target image. However, this leads to a substantial increase in network parameters and requires a massive amount of training data to prevent overfitting. Moreover, most CNN-based methods do not make full use of features at different levels and therefore achieve relatively low performance. In this letter, we propose a deep convolutional network named the densely connected residual network (DRNet). The proposed DRNet can be made very deep and wide while requiring fewer parameters. Its significant performance improvement is mainly due to the integration of dense skip connections and residual learning, which mitigates overfitting, vanishing gradients, and training instability when training very deep and wide networks. It also improves the propagation and reuse of features by creating direct connections from earlier layers to subsequent layers. We evaluate the proposed method on images from four benchmark datasets and set a new state of the art.

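As a rough illustration of how dense skip connections and residual learning can be combined in one block, the sketch below shows a minimal PyTorch-style module. The module name, channel widths, growth rate, and layer count are illustrative assumptions only and do not reproduce the authors' exact DRNet architecture.

    # Minimal sketch of a densely connected residual block (assumed structure,
    # not the authors' exact DRNet design).
    import torch
    import torch.nn as nn

    class DenseResidualBlock(nn.Module):
        def __init__(self, channels=64, growth=32, num_layers=4):
            super().__init__()
            self.convs = nn.ModuleList()
            in_ch = channels
            for _ in range(num_layers):
                # Each layer receives the concatenation of all previous feature
                # maps (dense skip connections), encouraging feature reuse.
                self.convs.append(nn.Sequential(
                    nn.Conv2d(in_ch, growth, kernel_size=3, padding=1),
                    nn.ReLU(inplace=True),
                ))
                in_ch += growth
            # 1x1 convolution fuses the concatenated features back to `channels`.
            self.fuse = nn.Conv2d(in_ch, channels, kernel_size=1)

        def forward(self, x):
            features = [x]
            for conv in self.convs:
                features.append(conv(torch.cat(features, dim=1)))
            # Identity (residual) connection eases gradient flow in deep networks.
            return x + self.fuse(torch.cat(features, dim=1))

    # Usage example:
    # y = DenseResidualBlock()(torch.randn(1, 64, 32, 32))

Concatenating earlier feature maps gives later layers direct access to low-level features, while the identity shortcut keeps gradients well-conditioned as depth and width grow, which is the intuition behind combining the two mechanisms.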