Abstract

Recently, convolutional neural networks have achieved remarkable performance on video super-resolution. However, how to exploit the spatial and temporal information of video efficiently and effectively remains challenging. In this work, we design a bidirectional temporal-recurrent propagation unit, which allows temporal information to flow from frame to frame in an RNN-like manner and avoids complex motion estimation modeling and motion compensation. To better fuse the information from the two temporal-recurrent propagation units, we use a channel attention mechanism. Additionally, we recommend progressive up-sampling instead of one-step up-sampling, and find that progressive up-sampling yields better experimental results. Extensive experiments show that our algorithm outperforms several recent state-of-the-art video super-resolution (VSR) methods with a smaller model size.
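The fusion step described above can be illustrated with a minimal NumPy sketch. This assumes a squeeze-and-excitation-style channel attention (global pooling, a small bottleneck, and a sigmoid gate) applied to the concatenated forward and backward features; the actual BTRPN layer shapes and learned weights are not specified in this summary, so the weights below are random stand-ins.

```python
import numpy as np

def channel_attention(feats, reduction=4):
    """Squeeze-and-excitation-style channel attention (hypothetical sketch).

    feats: (C, H, W) feature map.
    """
    c = feats.shape[0]
    # Squeeze: global average pooling per channel -> (C,)
    squeezed = feats.mean(axis=(1, 2))
    # Excitation: two tiny fully connected layers; random weights stand in
    # for the learned parameters of the real network.
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    hidden = np.maximum(w1 @ squeezed, 0)        # ReLU bottleneck
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))  # sigmoid gate in (0, 1)
    # Reweight each channel by its attention score.
    return feats * weights[:, None, None]

def fuse_bidirectional(forward_feats, backward_feats):
    """Fuse forward/backward unit outputs: concatenate, then gate channels."""
    fused = np.concatenate([forward_feats, backward_feats], axis=0)
    return channel_attention(fused)
```

The key design point is that the gate lets the network weight the two temporal directions per channel rather than averaging them uniformly.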

Highlights

  • Super-resolution (SR) is a class of image processing techniques that generate a high-resolution (HR) image or video from its corresponding low-resolution (LR) image or video

  • Most existing video super-resolution (VSR) methods [10,11,12,13,14] consist of similar steps: motion estimation and compensation, feature fusion, and up-sampling

  • To alleviate the above issues, we propose an end-to-end bidirectional temporal-recurrent propagation network (BTRPN)

Summary

Introduction

Super-resolution (SR) is a class of image processing techniques that generate a high-resolution (HR) image or video from its corresponding low-resolution (LR) image or video. Most existing VSR methods [10,11,12,13,14] consist of similar steps: motion estimation and compensation, feature fusion, and up-sampling. They usually use optical flow to estimate the motion between the reference frame and supporting frames, and align the supporting frames to the reference with warping operations. Inaccurate motion estimation and alignment may introduce artifacts around image structures in the aligned supporting frames, and computing optical flow for every pixel between frames consumes substantial computational resources. We propose a novel end-to-end bidirectional temporal-recurrent propagation network, which avoids the complicated combination of an optical-flow estimation network and a super-resolution network. Compared with one-step up-sampling, progressive up-sampling solves the SR optimization problem in a smaller solution space, which decreases the difficulty of learning and boosts the quality of the reconstructed images.
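Progressive up-sampling can be sketched as repeated ×2 sub-pixel (pixel-shuffle) steps instead of a single ×4 step. The sketch below is a minimal NumPy illustration, assuming pixel-shuffle up-sampling; the channel expansion that a learned convolution would perform before each shuffle is replaced by a simple repeat, since the paper's actual layer configuration is not given here.

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange (C*r^2, H, W) -> (C, H*r, W*r), as in sub-pixel convolution."""
    c2, h, w = x.shape
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)   # (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

def progressive_upsample(feats, steps=2):
    """Two x2 stages (total x4) rather than one x4 stage."""
    for _ in range(steps):
        # Stand-in for a learned conv that expands channels by r^2 = 4.
        feats = np.repeat(feats, 4, axis=0)
        feats = pixel_shuffle(feats, 2)
    return feats
```

Each ×2 stage only has to fill in detail between neighboring pixels, which is the "smaller solution space" argument: the mapping each stage must learn is easier than a direct LR→4×HR mapping.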

Single-Image Super-Resolution
Video Super-Resolution
Network Architecture
TRP Unit
Bidirectional Network
Attentional Mechanism
Progressive Up-Sampling
Datasets and Training Details
Depth and Channel Analysis
Bidirectional Model Analysis
Attention Mechanism
Quantitative and Qualitative Comparison
Parameters and Test Time Comparison
Findings
Conclusions

