Abstract

Video super-resolution (VSR) aims to generate high-resolution (HR) video by exploiting the temporal consistency and contextual similarity of low-resolution (LR) video sequences. The key to improving VSR quality lies in accurate frame alignment and the fusion of features from adjacent frames. We propose a dual-channel attention deep-and-shallow super-resolution network, combined with HR optical-flow compensation, to construct an end-to-end VSR framework, HOFADS-VSR (attention deep and shallow VSR network with HR optical flow compensation). HR optical flow, computed from the spatiotemporal dependency of consecutive LR frames, is used to compensate adjacent frames and achieve accurate frame alignment. The deep and shallow channels, built from attention residual blocks, restore small-scale detail features and large-scale contour features, respectively, and strengthen the rich features of global and local regions through weight adjustment. Extensive experiments demonstrate the effectiveness and robustness of HOFADS-VSR. Comparative results on the Vid4, SPMC-12, and Harmonic-8 datasets show that our network not only achieves good peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) scores but also restores structure and texture with excellent fidelity.
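The abstract does not specify the internals of the attention residual block, but the described "weight adjustment" of channel features is commonly realized as squeeze-and-excitation-style channel attention inside a residual connection. The following is a minimal NumPy sketch of such a block; the function name, weight shapes, and gating design are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def channel_attention_residual_block(x, w_down, w_up):
    """Hypothetical channel-attention residual block (SE-style sketch).

    x      : feature map of shape (C, H, W)
    w_down : bottleneck weights of shape (C, C // r) for reduction ratio r
    w_up   : expansion weights of shape (C // r, C)
    """
    # Squeeze: global average pooling over the spatial dimensions -> (C,)
    s = x.mean(axis=(1, 2))
    # Excitation: two-layer bottleneck, ReLU then sigmoid gate -> (C,)
    z = np.maximum(s @ w_down, 0.0)
    gate = 1.0 / (1.0 + np.exp(-(z @ w_up)))
    # Reweight each channel and add the residual (identity) connection
    return x + x * gate[:, None, None]
```

In a full network, many such blocks would be stacked in each of the deep and shallow channels, with the convolutional layers (omitted here) producing `x` and the gate weights learned end to end.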
