Abstract

Recently, convolutional neural networks (CNNs) have achieved impressive breakthroughs in single image super-resolution. In particular, by increasing the depth and width of the network, an efficient nonlinear mapping can be learned between the low-resolution input image and the high-resolution target image. However, this leads to a substantial increase in network parameters, requiring a massive amount of training data to prevent overfitting. Moreover, most CNN-based methods fail to make full use of features at different levels and therefore achieve relatively low performance. In this letter, we propose a deep convolutional network named densely connected residual networks (DRNet). The proposed DRNet can be made very deep and wide while requiring fewer parameters. Its significant performance improvement is mainly due to the integration of dense skip connections and residual learning. In this way, DRNet mitigates overfitting, vanishing gradients, and training instability when training very deep and wide networks. Moreover, it improves the propagation and reuse of features by creating direct connections from earlier layers to subsequent layers. We evaluate the proposed method on images from four benchmark datasets and set a new state of the art.
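To illustrate the core idea of combining dense skip connections with residual learning, the following is a minimal sketch in PyTorch. It is not the authors' implementation; the channel count, growth rate, number of layers, and class name are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): a block that combines dense
# connectivity with a residual path, assuming a PyTorch environment.
import torch
import torch.nn as nn

class DenseResidualBlock(nn.Module):
    """Each conv layer receives the concatenation of all earlier feature maps
    (dense connections); the block output adds the input back (residual learning)."""
    def __init__(self, channels=64, growth=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels + i * growth, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            ))
        # 1x1 conv fuses all accumulated features back to `channels`
        self.fuse = nn.Conv2d(channels + num_layers * growth, channels, kernel_size=1)

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # Dense connection: concatenate every previous feature map
            features.append(layer(torch.cat(features, dim=1)))
        # Residual connection: add the block input to the fused features
        return x + self.fuse(torch.cat(features, dim=1))

# Usage: a 64-channel feature map keeps its spatial shape through the block.
block = DenseResidualBlock()
out = block(torch.randn(1, 64, 48, 48))
print(out.shape)  # torch.Size([1, 64, 48, 48])
```

Because each layer reuses all earlier feature maps, the block can grow deep and wide with relatively few new parameters per layer, while the residual path eases gradient flow during training.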
