Abstract

In this paper, we propose an attention-guided dual spatial-temporal non-local network for video super-resolution (ADNLVSR). We integrate temporal and spatial non-local self-similar contexts from consecutive video frames after motion compensation, and discriminatively merge features at different levels with a channel attention mechanism for the target frame. During motion compensation, unlike previous methods that directly stack input images or features for merging, we use a learnable attention mechanism to guide the merging, which suppresses undesired components caused by misalignment and enhances desirable fine details. During feature fusion, in contrast to most previous approaches, which only consider global-level non-local self-similarity in space or time, we propose region-level spatial and temporal non-local operations that exploit temporal correlations and enhance similar spatial structures. Our analysis shows that the proposed modules effectively avoid the computational burden incurred by existing global-level non-local operations while enhancing correlated structural information. In addition, we propose a channel attention-guided residual dense block (CRDB), in which a second-order channel attention mechanism adaptively rescales channel-wise features for more discriminative representations. Extensive experiments on different datasets demonstrate superior performance over state-of-the-art published methods on video super-resolution.
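
For intuition, the sketch below shows one plausible PyTorch realization of a channel attention-guided residual dense block with second-order (covariance-based) channel attention, as described in the abstract. It is a minimal illustration only: the class names, layer counts, growth rate, reduction ratio, and the simplified covariance pooling are our own assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed, not the authors' code) of a CRDB-style block:
# a residual dense block whose output is rescaled channel-wise using
# second-order (covariance) statistics before the residual connection.
import torch
import torch.nn as nn


class SecondOrderChannelAttention(nn.Module):
    """Rescale channels from a descriptor built on second-order statistics."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        feat = x.view(b, c, h * w)                              # B x C x N
        feat = feat - feat.mean(dim=2, keepdim=True)            # zero-mean per channel
        cov = torch.bmm(feat, feat.transpose(1, 2)) / (h * w)   # B x C x C covariance
        stats = cov.mean(dim=2).view(b, c, 1, 1)                # per-channel descriptor
        return x * self.fc(stats)                               # channel-wise rescaling


class CRDB(nn.Module):
    """Residual dense block followed by second-order channel attention."""

    def __init__(self, channels=64, growth=32, layers=4):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels + i * growth, growth, 3, padding=1),
                nn.ReLU(inplace=True),
            )
            for i in range(layers)
        )
        self.fuse = nn.Conv2d(channels + layers * growth, channels, 1)
        self.attention = SecondOrderChannelAttention(channels)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))          # dense connections
        out = self.fuse(torch.cat(feats, dim=1))                 # 1x1 fusion
        return x + self.attention(out)                           # residual connection


# Usage example (shapes are preserved):
# y = CRDB()(torch.randn(1, 64, 32, 32))   # -> (1, 64, 32, 32)
```

The design choice illustrated here is that second-order pooling captures channel correlations that first-order (mean) pooling cannot, which is what allows the attention weights to rescale feature channels more discriminatively.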
