Abstract

Video restoration and enhancement tasks, including video super-resolution (VSR), aim to convert low-quality videos into high-quality ones and thereby improve the viewer's visual experience. In recent years, many deep learning methods based on optical flow estimation or deformable convolution have been applied to video super-resolution. However, we find that motion estimation based on a single optical flow struggles to capture sufficient inter-frame information, while methods using deformable convolution lack explicit motion constraints, which limits their ability to handle fast motion. We therefore propose a multi-offset-flow-based network (MOFN) that exploits inter-frame information more effectively through optical flow with offset diversity. We propose an alignment and compensation module that estimates optical flows with multiple offsets for neighbouring frames and performs frame alignment. The aligned video frames are fed into a fusion module, and high-quality video frames are obtained after fusion and reconstruction. Extensive experiments show that our proposed model handles motion well. On several benchmark datasets, our method achieves favorable performance compared with state-of-the-art methods.
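To illustrate the general idea of flow-based alignment with offset diversity described above, the following is a minimal, hypothetical PyTorch sketch (not the authors' implementation): a small network predicts several candidate flows from a reference/neighbour feature pair, the neighbour features are warped with each flow, and the warped copies are fused. All module and parameter names (`MultiOffsetAlign`, `num_offsets`, `flow_warp`, etc.) are illustrative assumptions, and the flow-estimation and fusion layers are placeholders for the paper's alignment and compensation module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def flow_warp(feat, flow):
    """Backward-warp features (N, C, H, W) with a flow field (N, 2, H, W)."""
    n, _, h, w = feat.shape
    # Base sampling grid in pixel coordinates.
    gy, gx = torch.meshgrid(
        torch.arange(h, device=feat.device, dtype=feat.dtype),
        torch.arange(w, device=feat.device, dtype=feat.dtype),
        indexing="ij",
    )
    grid = torch.stack((gx, gy), dim=0).unsqueeze(0) + flow  # (N, 2, H, W)
    # Normalise to [-1, 1] as required by grid_sample.
    grid_x = 2.0 * grid[:, 0] / max(w - 1, 1) - 1.0
    grid_y = 2.0 * grid[:, 1] / max(h - 1, 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)  # (N, H, W, 2)
    return F.grid_sample(feat, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)


class MultiOffsetAlign(nn.Module):
    """Predict `num_offsets` candidate flows, warp the neighbour features
    with each flow, and fuse the warped results (illustrative sketch)."""

    def __init__(self, channels=64, num_offsets=4):
        super().__init__()
        self.num_offsets = num_offsets
        self.flow_net = nn.Sequential(
            nn.Conv2d(2 * channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 2 * num_offsets, 3, padding=1),
        )
        self.fuse = nn.Conv2d(num_offsets * channels, channels, 1)

    def forward(self, ref_feat, nbr_feat):
        flows = self.flow_net(torch.cat([ref_feat, nbr_feat], dim=1))
        warped = [
            flow_warp(nbr_feat, flows[:, 2 * i:2 * i + 2])
            for i in range(self.num_offsets)
        ]
        return self.fuse(torch.cat(warped, dim=1))


# Toy usage: align a neighbouring frame's features to the reference frame.
if __name__ == "__main__":
    align = MultiOffsetAlign(channels=64, num_offsets=4)
    ref, nbr = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
    print(align(ref, nbr).shape)  # torch.Size([1, 64, 32, 32])
```

In this sketch, the aligned features returned by `MultiOffsetAlign` would then be passed to a fusion and reconstruction stage, mirroring the pipeline the abstract describes at a high level.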
