Abstract

Video super-resolution (SR) is the task of reconstructing high-resolution (HR) frames by increasing the spatial resolution of their low-resolution (LR) counterparts. SR has broad applications in satellite imaging, face recognition, defense, medical imaging, and image restoration. In this paper, we propose a technique that performs motion compensation between consecutive frames of an LR video and passes the result as input to our deep convolutional neural network (CNN) of 25 weight layers. The model is trained on both the spatial and temporal dimensions of an LR-HR database, and consecutive HR frames are reconstructed by adding the sub-pixel motion vectors to the super-resolved HR frame. We observed that over 97% of the computation time is spent on convolution, so the convolution filter operation in the CNN is parallelized using novel GPU optimization methods, achieving a speedup factor of 1000X over the CPU. Using the gradient clipping technique of [1] boosts the convergence rate of the training model. We train the model on multi-scale LR-HR frames, thereby supporting multiple up-scaling factors. We justify the proposed method by comparing our experimental results with current SR algorithms.
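As a rough illustration of the training setup the abstract describes, the sketch below shows a 25-weight-layer residual CNN operating on a (bicubically upscaled, motion-compensated) input frame, with value-based gradient clipping in the training step. This is a minimal sketch, not the authors' implementation: the channel width, kernel size, clip value, and the PyTorch framing are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class DeepSRNet(nn.Module):
    """Deep CNN with 25 weight layers; input is an upscaled, motion-compensated LR frame."""
    def __init__(self, channels=64, num_layers=25):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(num_layers - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # Residual learning: the network predicts the HR detail added back to the input.
        return x + self.body(x)

def train_step(model, optimizer, lr_frame, hr_frame, clip_value=0.4):
    """One training step; clip_value is an illustrative assumption, not from the paper."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(lr_frame), hr_frame)
    loss.backward()
    # Gradient clipping (as in [1]) keeps training stable at a high learning rate,
    # which is what boosts the convergence rate mentioned in the abstract.
    torch.nn.utils.clip_grad_value_(model.parameters(), clip_value)
    optimizer.step()
    return loss.item()
```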
