Abstract

Recently, convolutional neural network (CNN) based models have shown great potential in the task of single image super-resolution (SISR). However, many state-of-the-art SISR solutions merely reproduce tricks proven effective in other vision tasks, such as pursuing a deeper model. In this paper, we propose a new solution, named Multi-Receptive-Field Network (MRFN), which outperforms existing SISR solutions in three different aspects. First, on receptive field: a novel multi-receptive-field (MRF) module is proposed to extract and fuse features under different receptive fields, from local to global. Integrating these hierarchical features yields better mappings for recovering high-fidelity details at different scales. Second, on network architecture: both dense skip connections and deep supervision are utilized to combine features from the current MRF module and preceding ones, training more representative features. Moreover, a deconvolution layer is embedded at the end of the network to avoid the artificial priors induced by numerical data pre-processing (e.g., bicubic stretching) and to speed up restoration. Finally, on error modeling: different from the $L_1$ and $L_2$ loss functions, we propose a novel two-parameter training loss, the Weighted Huber loss, which adaptively adjusts the back-propagated derivative according to the residual value and thus fits the reconstruction error more effectively. Extensive qualitative and quantitative evaluations on benchmark datasets demonstrate that the proposed MRFN achieves more accurate reconstruction results than most state-of-the-art methods with significantly lower complexity.
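
For context on the loss described above: the standard Huber loss interpolates between $L_2$ (quadratic) behavior for small residuals and $L_1$ (linear) behavior for large ones, which is what bounds the back-propagated derivative. The sketch below shows this baseline together with a hypothetical two-parameter weighted variant; the names `weighted_huber_loss`, `delta`, and `weight`, and the specific weighting scheme, are illustrative assumptions and not the formulation defined in the paper.

```python
import torch

def huber_loss(residual: torch.Tensor, delta: float = 1.0) -> torch.Tensor:
    # Standard Huber loss: quadratic for |r| <= delta, linear beyond,
    # so the gradient magnitude is capped at delta for large residuals.
    abs_r = residual.abs()
    quadratic = 0.5 * residual ** 2
    linear = delta * (abs_r - 0.5 * delta)
    return torch.where(abs_r <= delta, quadratic, linear).mean()

def weighted_huber_loss(residual: torch.Tensor,
                        delta: float = 1.0,
                        weight: float = 2.0) -> torch.Tensor:
    # HYPOTHETICAL two-parameter variant, for illustration only: `weight`
    # rescales both branches, changing the size of the back-propagated
    # derivative while keeping the loss continuous at |r| = delta.
    # This is NOT the Weighted Huber loss defined in the MRFN paper.
    abs_r = residual.abs()
    quadratic = 0.5 * weight * residual ** 2
    linear = weight * delta * (abs_r - 0.5 * delta)
    return torch.where(abs_r <= delta, quadratic, linear).mean()

# Example: residual between a super-resolved patch and its ground truth.
sr = torch.rand(4, 3, 48, 48)
hr = torch.rand(4, 3, 48, 48)
loss = weighted_huber_loss(sr - hr, delta=0.5, weight=1.5)
baseline = huber_loss(sr - hr)
```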
