Abstract
In recent years, deep convolutional neural networks have achieved great success in image super-resolution (SR). However, most of these methods rely on increasing network depth, width, or input image size to achieve a more accurate fit, which leads to redundant parameters and slower convergence. Unlike networks for object recognition tasks, networks for image super-resolution often omit pooling layers to avoid losing learned information. In this paper, we propose a dual-link residual network with pooling and deconvolution layers (RPDN). By adding pooling layers and corresponding deconvolution layers, the model can extract more structural information from non-adjacent pixels, reduce computation, and increase model sparsity. Inspired by DenseNet, we propose a dual-link structure that fuses shallow and deep features: one chain connects the convolution layers, and the other connects the pooling layers. RPDN also uses Intergroup Connections (IC) to aid gradient back-propagation. Experiments on benchmark datasets show that our network effectively reduces artifacts.
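To make the dual-link idea concrete, below is a minimal sketch of one such stage in PyTorch, assuming a plausible configuration: a convolution chain that preserves spatial resolution, a parallel pooling chain whose output is restored by a transposed convolution (deconvolution), concatenation-based fusion, and a residual skip. The class name `DualLinkBlock`, the channel counts, and the 1x1 fusion layer are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn


class DualLinkBlock(nn.Module):
    """Hypothetical sketch of one dual-link stage: a convolution chain and a
    pooling/deconvolution chain fused by concatenation, plus a residual skip.
    Layer sizes and the fusion scheme are assumptions for illustration."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # Convolution chain (keeps spatial resolution).
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Pooling chain: downsample to capture non-adjacent pixel structure,
        # then a deconvolution (transposed convolution) restores resolution.
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        self.deconv = nn.ConvTranspose2d(channels, channels,
                                         kernel_size=2, stride=2)
        # 1x1 convolution fuses the two chains back to `channels` feature maps.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        conv_feat = self.conv(x)
        pool_feat = self.deconv(self.pool(x))
        fused = self.fuse(torch.cat([conv_feat, pool_feat], dim=1))
        return fused + x  # residual skip, in the spirit of intergroup connections


if __name__ == "__main__":
    block = DualLinkBlock(64)
    y = block(torch.randn(1, 64, 48, 48))
    print(y.shape)  # torch.Size([1, 64, 48, 48])
```

In this sketch, the pooling branch widens the effective receptive field cheaply (computation on a downsampled map), while the residual skip keeps gradients flowing to earlier layers; in the full network, chains of such blocks would be linked in the DenseNet-inspired fashion described above.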