Abstract

Video super-resolution (VSR) aims to recover a realistic high-resolution (HR) frame from its corresponding center low-resolution (LR) frame together with several neighbouring supporting frames. To exploit the extra temporal information in the supporting LR frames, most VSR methods rely heavily on accurate motion estimation and compensation models to align the LR frames. However, the motion between frames has no ground truth, so motion estimation and compensation models are difficult to train; inaccurate alignment introduces artifacts and blur, which in turn degrade the recovery of the high-resolution frame. We propose an effective separate 3D Convolutional Neural Network (CNN) with wide activation that avoids the drawbacks of explicit motion estimation and compensation. Separate 3D convolution factorizes a 3D convolution into a 2D convolution over the spatial domain followed by a 1D convolution over the temporal domain, which not only captures spatial and temporal information simultaneously but also reduces the computational complexity compared to a standard 3D CNN.
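
To illustrate the spatial/temporal factorization described above, the following is a minimal sketch of a separate 3D convolution block in PyTorch. The module name, channel counts, kernel size, and activation are illustrative assumptions for exposition, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class Separate3DConv(nn.Module):
    """Factorizes a k x k x k 3D convolution into a 1 x k x k spatial
    convolution followed by a k x 1 x 1 temporal convolution
    (hypothetical block; details are assumptions, not the paper's exact design)."""
    def __init__(self, in_channels, out_channels, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        # 2D convolution over the spatial dimensions (H, W) of each frame
        self.spatial = nn.Conv3d(in_channels, out_channels,
                                 kernel_size=(1, kernel_size, kernel_size),
                                 padding=(0, pad, pad))
        # 1D convolution over the temporal dimension (T) across frames
        self.temporal = nn.Conv3d(out_channels, out_channels,
                                  kernel_size=(kernel_size, 1, 1),
                                  padding=(pad, 0, 0))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # x: (batch, channels, T, H, W) stack of LR frames
        return self.temporal(self.act(self.spatial(x)))

# Example: 5 LR frames of size 64x64 with 3 colour channels
frames = torch.randn(1, 3, 5, 64, 64)
out = Separate3DConv(3, 64)(frames)
print(out.shape)  # torch.Size([1, 64, 5, 64, 64])
```

Compared with a full k x k x k 3D convolution, whose kernel has k^3 weights per input-output channel pair, the factorized pair uses roughly k^2 + k weights, which is the source of the reduced computational complexity the abstract mentions.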
