Abstract

In recent years, deep convolutional neural networks (CNNs) have achieved great success in single image super-resolution (SISR). However, existing CNN-based SISR methods struggle to reach ideal performance due to the limited information contained in a single low-resolution (LR) image. Moreover, when the scale factor is large, SISR methods find it difficult to learn and reconstruct the unknown information, leading to poor performance. To address these issues, we propose MFSRResNet, a deep residual learning super-resolution framework that takes multi-frame LR images as input. MFSRResNet is based on the SRResNet architecture; the main modifications are the number of input frames and the number of feature maps in the convolutional layers. We use five LR frames as input rather than a single LR frame. The multi-frame LR images are created by randomly downsampling an HR image while ensuring sub-pixel shifts among them. The multi-frame input increases the amount of information available at the input end and thus substantially improves the reconstruction results. Experiments show that MFSRResNet integrates the information across different LR images well and obtains better reconstruction results. MFSRResNet demonstrates state-of-the-art performance on all benchmark datasets in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). Compared with the current state-of-the-art SISR method RCAN, the average PSNR/SSIM improvement of MFSRResNet on the two benchmark datasets Set5 and Set14 is 2.67dB/0.0495 (×3), 2.27dB/0.05498 (×4), and 1.56dB/0.0504 (×8).
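The abstract's data-generation step, producing multiple LR frames with sub-pixel shifts by randomly downsampling one HR image, can be sketched as follows. This is a minimal illustration under assumed details (the paper does not specify its exact procedure): here each frame comes from cropping the HR image at a random integer offset in [0, scale) per axis before block-averaging downsampling, so an offset of d HR pixels is a d/scale sub-pixel shift between LR frames. The function name `make_lr_frames` and all parameters are hypothetical.

```python
import numpy as np

def make_lr_frames(hr, scale=4, num_frames=5, rng=None):
    """Sketch: generate sub-pixel-shifted LR frames from one HR image.

    Assumed procedure (not necessarily the paper's): crop the HR image
    at a random integer offset in [0, scale) along each axis, then
    downsample by averaging each scale x scale block. The per-frame
    offsets yield sub-pixel shifts of offset/scale in LR pixel units.
    """
    rng = np.random.default_rng(rng)
    h, w = hr.shape
    # Trim so every offset yields the same LR size.
    h_lr = (h - scale) // scale
    w_lr = (w - scale) // scale
    frames, shifts = [], []
    for _ in range(num_frames):
        dy, dx = rng.integers(0, scale, size=2)
        crop = hr[dy:dy + h_lr * scale, dx:dx + w_lr * scale]
        # Box-filter downsampling: average each scale x scale block.
        lr = crop.reshape(h_lr, scale, w_lr, scale).mean(axis=(1, 3))
        frames.append(lr)
        shifts.append((dy / scale, dx / scale))  # shift in LR pixel units
    return np.stack(frames), shifts

# Usage: five ×4 LR frames from a 64×64 grayscale HR image.
hr = np.arange(64 * 64, dtype=np.float64).reshape(64, 64)
frames, shifts = make_lr_frames(hr, scale=4, num_frames=5, rng=0)
print(frames.shape)  # (5, 15, 15)
```

The stacked frames would then form the five-frame input tensor that the network consumes in place of a single LR image.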
