Abstract

Video super-resolution converts low-resolution videos into sharp high-resolution ones. To make better use of temporal information in video super-resolution, we design an inverse recurrent net and hybrid local fusion. We concatenate the original low-resolution input sequence with its inverse sequence repeatedly. The new sequence is viewed as a combination of different stages and is processed sequentially by a recurrent net. The outputs of the last two stages, which run in opposite directions, are fused to generate the final images. Our inverse recurrent net extracts more bidirectional temporal information from the input sequence without adding parameters to the corresponding unidirectional recurrent net. We also propose a hybrid local fusion method that uses parallel fusion and cascade fusion to incorporate sliding-window-based methods into our inverse recurrent net. Extensive experimental results demonstrate the effectiveness of the proposed inverse recurrent net and hybrid local fusion, in terms of both visual quality and quantitative evaluations. The code will be released at https://github.com/5ofwind.
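The stage construction described in the abstract, concatenating the input sequence with its inverse repeatedly so that the recurrent net alternates between forward and backward passes, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name `build_stage_sequence` and the `num_stages` parameter are assumptions for clarity.

```python
def build_stage_sequence(frames, num_stages):
    """Build the repeated forward/inverse sequence of stages.

    Each stage is one pass over the input frames; even-indexed stages
    run forward and odd-indexed stages run in reverse, so consecutive
    stages traverse the sequence in opposite temporal directions.
    (Illustrative sketch, not the paper's actual code.)
    """
    stages = []
    for i in range(num_stages):
        if i % 2 == 0:
            stages.append(list(frames))            # forward pass
        else:
            stages.append(list(reversed(frames)))  # backward (inverse) pass
    return stages

# Example: 3 frames, 4 stages -> forward, backward, forward, backward.
# The outputs of the last two stages (here stages 2 and 3, which run in
# opposite directions) would then be fused to produce the final images.
stages = build_stage_sequence(["f1", "f2", "f3"], 4)
```

Because the same recurrent net weights are reused across all stages, this construction adds temporal context without adding parameters, as the abstract states.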
