Abstract

Video super-resolution algorithms usually exploit the motion of each pixel across consecutive frames to interpolate pixel values at a higher resolution and reconstruct high-resolution frames from a low-resolution input video. Recently, architectures based on deep neural networks have gained popularity and can generate higher-resolution videos with better visual quality. Deep neural networks have also made single-image super-resolution practical: a higher-resolution image is constructed from a given input image by learning the transformation from low-resolution images to high-resolution images. In this paper we propose to apply a single-image super-resolution algorithm to each frame of the original video and then use the resulting frames to estimate the motion of each pixel. Since the spatial resolution of the estimated per-pixel motion affects the visual quality of the reconstructed high-resolution video, the proposed approach is expected to improve the visual quality of video super-resolution. When tested on videos with diverse content and different levels of motion, the proposed approach consistently produces good-quality reconstructions, outperforming state-of-the-art algorithms and offering competitive visual performance on benchmark datasets.
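
The following is a minimal sketch of the pipeline described above, not the paper's actual implementation: each low-resolution frame is first upscaled (bicubic interpolation stands in here for a learned single-image super-resolution network), dense optical flow is then estimated between consecutive upscaled frames (Farneback flow from OpenCV is assumed), and the motion-compensated previous frame is fused with the current one by simple averaging. All function names and the fusion step are illustrative assumptions.

```python
import cv2
import numpy as np


def single_image_sr(frame, scale=4):
    """Stand-in for a learned single-image SR model (bicubic placeholder)."""
    return cv2.resize(frame, None, fx=scale, fy=scale,
                      interpolation=cv2.INTER_CUBIC)


def estimate_flow(src_gray, dst_gray):
    """Dense optical flow from src to dst on the upscaled frames."""
    return cv2.calcOpticalFlowFarneback(
        src_gray, dst_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)


def backward_warp(frame, flow):
    """Sample `frame` at positions displaced by `flow` (grid + flow)."""
    h, w = flow.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LINEAR)


def upscale_video(lr_frames, scale=4):
    """Upscale a list of BGR frames: per-frame SR, then flow-guided fusion."""
    hr_frames = [single_image_sr(f, scale) for f in lr_frames]
    fused = [hr_frames[0]]
    for prev_hr, curr_hr in zip(hr_frames, hr_frames[1:]):
        prev_gray = cv2.cvtColor(prev_hr, cv2.COLOR_BGR2GRAY)
        curr_gray = cv2.cvtColor(curr_hr, cv2.COLOR_BGR2GRAY)
        # Flow from current to previous frame, so the previous HR frame
        # can be backward-warped into the current frame's coordinates.
        flow = estimate_flow(curr_gray, prev_gray)
        aligned_prev = backward_warp(prev_hr, flow)
        # Simple temporal fusion by averaging; an actual reconstruction
        # step would be more sophisticated than this.
        fused.append(((aligned_prev.astype(np.float32) +
                       curr_hr.astype(np.float32)) / 2).astype(np.uint8))
    return fused
```

The key design choice the sketch illustrates is that motion is estimated on the already-upscaled frames, so the flow field has the same spatial resolution as the output video rather than the low-resolution input.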
