Abstract

Deep convolutional networks produce high-quality super-resolution images by learning a nonlinear end-to-end mapping between low- and high-resolution images. Many state-of-the-art super-resolution networks employ residual blocks in their architectures, where each residual block adds high-frequency residual signals to the feature maps input to the block. In this paper, a new residual block is proposed for image super resolution. The proposed block consists of three modules: a feature transformation module, a nonlinear edge extraction module, and a feature fusion module. The feature transformation module produces high-frequency residual signals, and the nonlinear edge extraction module extracts the edges of the features input to the block. These high-frequency features are then fused by the feature fusion module to produce a rich set of high-frequency residual features. The performance of the super-resolution network built from the proposed residual block is compared with that of state-of-the-art lightweight super-resolution schemes on four benchmark datasets. The proposed scheme is shown to outperform the state-of-the-art lightweight super-resolution networks when both performance and the number of network parameters are taken into consideration simultaneously.
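
The following is a minimal sketch of the three-module residual block described above, written in PyTorch. The specific layer choices (3x3 convolutions, a conv+ReLU edge branch, a 1x1 fusion layer, 64 channels) are illustrative assumptions; the abstract only specifies the module structure, not the paper's actual configuration.

```python
import torch
import torch.nn as nn


class EdgeResidualBlock(nn.Module):
    """Illustrative residual block with transformation, edge extraction and fusion modules.

    Layer sizes and the exact form of each module are assumptions for the sketch,
    not the configuration reported in the paper.
    """

    def __init__(self, channels: int = 64):
        super().__init__()
        # Feature transformation module: produces high-frequency residual signals.
        self.transform = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        # Nonlinear edge extraction module: emphasizes edges of the input features
        # (modeled here, as an assumption, by a single conv + ReLU).
        self.edge = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Feature fusion module: merges the two high-frequency branches.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Fuse the transformed features and the extracted edges into one residual.
        residual = self.fuse(torch.cat([self.transform(x), self.edge(x)], dim=1))
        # As in standard residual super-resolution blocks, the high-frequency
        # residual is added back to the block's input feature maps.
        return x + residual


# Usage example: one block applied to a batch of 64-channel feature maps.
if __name__ == "__main__":
    block = EdgeResidualBlock(channels=64)
    features = torch.randn(1, 64, 48, 48)
    out = block(features)
    print(out.shape)  # torch.Size([1, 64, 48, 48])
```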
