Abstract

Conventional Convolutional Neural Network (CNN) based video super-resolution (VSR) methods depend heavily on motion compensation: pixels in the input frames are warped according to flow-like information to eliminate inter-frame differences. These methods must therefore trade off the distraction caused by spatio-temporal inconsistency against the pixel-wise detail damage caused by compensation. In this paper, we propose a novel video super-resolution method with a dynamic filter network based compensation module and a residual network based SR module. Unlike traditional VSR techniques, our method does not warp the input pixels, but performs motion compensation during feature extraction. The experimental results demonstrate that our method outperforms state-of-the-art VSR algorithms by at least 1.08 dB in terms of PSNR, and recovers more details with superior visual quality.
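The core idea of a dynamic filter network is that the filters are predicted per spatial location from the input itself, rather than learned as fixed weights, so compensation can be applied locally without explicitly warping pixels. The sketch below illustrates only that per-location filtering step on a single-channel feature map with NumPy; all shapes and the function name are hypothetical, and in the actual method the filters would be produced by a learned network from neighboring frames.

```python
import numpy as np

def apply_dynamic_filters(features, filters):
    """Apply a distinct k x k filter at every spatial location.

    features: (H, W) feature map.
    filters:  (H, W, k*k) per-pixel filter weights (hypothetical layout);
              in a dynamic filter network these would be predicted from
              the input frames by a small CNN.
    """
    H, W = features.shape
    k = int(np.sqrt(filters.shape[-1]))
    pad = k // 2
    padded = np.pad(features, pad, mode="edge")
    out = np.zeros_like(features)
    for i in range(H):
        for j in range(W):
            # Local k x k patch centered on (i, j), flattened,
            # weighted by this location's own filter.
            patch = padded[i:i + k, j:j + k].ravel()
            out[i, j] = patch @ filters[i, j]
    return out
```

With identity filters (all weight on the center tap) the output reproduces the input, which is a convenient sanity check; motion compensation corresponds to the network shifting that weight toward where the content moved.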
