Abstract

Video super-resolution methods are often computationally expensive and slow because temporal frame alignment and spatial feature fusion are performed sequentially. To address this issue, a novel video super-resolution method based on a hierarchical recurrent multireceptive-field integration network is proposed. Specifically, a hierarchical recurrent structure is designed to fully model the temporal correlation among sequential low-resolution frames. A residual multireceptive-field integration module is then introduced to update the hidden state and extract rich contextual features. Finally, an adaptive fusion strategy further boosts reconstruction quality while using fewer parameters. Extensive experiments on benchmark datasets demonstrate that the proposed method achieves a better balance between reconstruction quality and speed than existing video super-resolution methods.
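The recurrent update with multireceptive-field integration described above can be sketched as follows. This is a minimal, hypothetical illustration only: simple 1-D box filters of different widths stand in for learned convolutions, and the fusion weight `alpha` is an assumed scalar rather than the paper's learned adaptive fusion.

```python
import numpy as np

def multi_receptive_field(x, kernel_sizes=(3, 5, 7)):
    """Apply parallel 1-D filters of different widths and integrate
    them with a residual connection (box filters are a stand-in for
    learned convolutions)."""
    out = np.zeros_like(x, dtype=float)
    for k in kernel_sizes:
        kernel = np.ones(k) / k                 # box filter of width k
        out += np.convolve(x, kernel, mode="same")
    return x + out / len(kernel_sizes)          # residual integration

def recurrent_step(frame, hidden, alpha=0.5):
    """One recurrent update: fuse the current low-resolution frame with
    the previous hidden state, then enrich context with the
    multireceptive-field block. `alpha` is an assumed fusion weight."""
    fused = alpha * frame + (1 - alpha) * hidden  # fusion stand-in
    return multi_receptive_field(fused)

# Process a toy "video": a short sequence of 1-D frames.
frames = [np.sin(np.linspace(0, 2 * np.pi, 32) + 0.1 * t) for t in range(4)]
hidden = np.zeros(32)
for frame in frames:
    hidden = recurrent_step(frame, hidden)
print(hidden.shape)
```

The key property the sketch captures is that each frame is processed once against a carried hidden state, rather than re-aligning a window of frames at every step, which is what makes the recurrent formulation faster than sliding-window alignment.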
