Abstract

In this paper, we propose a methodology for training highly accurate deep convolutional neural networks (CNNs) for video super-resolution (SR). To exploit inter-frame correlations, we introduce a video SR network based on two-stage motion compensation (VSR-TMC). First, the low-resolution (LR) frames are aligned using LR optical flow and fed into a 3D-convolutional network for spatial super-resolution; this network generates intermediate high-resolution (HR) frames from the aligned LR frames. The HR optical flow between the intermediate HR frames is then used to refine them into the final output. This HR optical flow can be estimated either from the intermediate HR frames or by a proposed super-resolution network that operates directly in the optical-flow domain. The latter produces HR optical flow directly from LR optical flow, which is more efficient than computing it from HR frames. Experimental results on a publicly available dataset demonstrate that VSR-TMC significantly outperforms both single-image SR networks and video SR networks that use only LR motion compensation, achieving state-of-the-art performance.
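The alignment step described above, warping neighboring frames toward a reference frame using a dense optical-flow field before they enter the 3D-convolutional network, can be sketched as follows. This is an illustrative backward-warping routine with bilinear sampling, not the paper's implementation; the function name and array conventions are assumptions.

```python
import numpy as np

def warp_with_flow(frame, flow):
    """Backward-warp a grayscale frame (H x W) toward a reference frame
    using a dense optical-flow field `flow` (H x W x 2, pixel offsets
    in (dx, dy) order). Bilinear sampling with border clamping.
    Illustrative only; the paper's alignment module is not specified
    at this level of detail."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    # Source coordinates: where each output pixel samples from.
    x_src = np.clip(xs + flow[..., 0], 0, w - 1)
    y_src = np.clip(ys + flow[..., 1], 0, h - 1)
    # Integer corners and fractional weights for bilinear interpolation.
    x0 = np.floor(x_src).astype(int)
    y0 = np.floor(y_src).astype(int)
    x1 = np.minimum(x0 + 1, w - 1)
    y1 = np.minimum(y0 + 1, h - 1)
    wx = x_src - x0
    wy = y_src - y0
    top = frame[y0, x0] * (1 - wx) + frame[y0, x1] * wx
    bot = frame[y1, x0] * (1 - wx) + frame[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

With a zero flow field the frame is returned unchanged; a constant flow of (1, 0) shifts content one pixel left, mimicking compensation for uniform rightward motion between frames.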

