Abstract

We propose a motion-perceptive deformable alignment network that introduces pre-computed optical flow to improve the motion perception of the deformable alignment process. The pre-computed flow shares the burden of motion estimation with the learned offsets while preserving the flexibility of the deformable convolutional network. In addition, we propose a motion-adaptive pyramid structure in which features are aligned at multiple scales and then merged according to the motion strength among the input frames. With these structures, we construct a space-time super-resolution (STSR) network with improved motion compensation ability. STSR aims to restore a high-resolution, high-frame-rate sequence from its low-resolution, low-frame-rate counterpart. The proposed STSR network is trained on the Vimeo-90K dataset, and tests are conducted on the Vimeo-90K, Densely Annotated Video Segmentation (DAVIS), and REalistic and Dynamic Scenes (REDS) datasets. Performance is evaluated using the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) of the entire restored frame on the Y channel. Extensive experiments demonstrate that the proposed network outperforms both one- and two-stage STSR methods, exhibits improved alignment ability, and synthesizes significantly better interpolated frames.
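The core idea of guiding alignment with a pre-computed flow, so that the network only needs to learn a small residual offset on top of it, can be sketched as follows. This is a minimal single-channel NumPy illustration with bilinear sampling; the function names and the flow-plus-residual formulation are assumptions for illustration, not the paper's actual deformable-convolution implementation:

```python
import numpy as np

def bilinear_sample(feat, ys, xs):
    """Sample a 2-D feature map at float coordinates, clamping at the border."""
    H, W = feat.shape
    y0 = np.clip(np.floor(ys).astype(int), 0, H - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, W - 1)
    y1 = np.clip(y0 + 1, 0, H - 1)
    x1 = np.clip(x0 + 1, 0, W - 1)
    wy = np.clip(ys - y0, 0.0, 1.0)  # fractional weights
    wx = np.clip(xs - x0, 0.0, 1.0)
    top = feat[y0, x0] * (1 - wx) + feat[y0, x1] * wx
    bot = feat[y1, x0] * (1 - wx) + feat[y1, x1] * wx
    return top * (1 - wy) + bot * wy

def flow_guided_align(src_feat, flow, residual):
    """Align src_feat toward the reference frame.

    Total sampling offset = pre-computed optical flow + learned residual,
    so the learned part only has to predict a small correction rather
    than estimate the full motion from scratch.
    """
    H, W = src_feat.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    total = flow + residual  # shape (2, H, W): (dy, dx) per pixel
    return bilinear_sample(src_feat, ys + total[0], xs + total[1])
```

With a zero residual, the alignment reduces to plain flow warping; the deformable offsets then act as a learned refinement around the flow-indicated positions.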
