Abstract

Video super-resolution reconstructs high-resolution video frames from low-resolution input frames. Most current methods rely on motion estimation and motion compensation to extract temporal information, but inaccurate motion estimation degrades the quality of the super-resolved output. In addition, when convolutional networks are used to extract features, the amount of feature information is limited by the number of feature channels, which leads to poor reconstruction results. In this paper, we propose a recurrent regional-focus network for video super-resolution that avoids the influence of inaccurate motion compensation on the super-resolution results. The regional focus blocks in the network attend to different areas of the video frames, extract different features from shallow to deep layers, and are skip-connected to the last layer of the network through feature aggregation, enriching the features that participate in reconstruction. Experimental results show that our method achieves higher computational efficiency and better video super-resolution results than other temporal modeling methods.
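To make the described pipeline concrete, the following is a minimal numpy sketch of the two ideas the abstract combines: a recurrent hidden state that carries temporal information forward without explicit motion estimation, and a stack of blocks whose intermediate features are all aggregated (skip-connected) into the final reconstruction layer. All names, weight shapes, and the 1x1-style channel mixing are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def video_sr_recurrent(frames, num_blocks=3, hidden_ch=8, scale=2, seed=0):
    """Toy sketch of a recurrent video-SR pipeline with feature aggregation.

    frames: array of shape (T, C, N), where N = H*W flattened pixels.
    Returns upscaled frames of shape (T, C, N * scale**2).
    """
    T, C, N = frames.shape
    rng = np.random.default_rng(seed)
    # Random weights stand in for learned 1x1 convolutions.
    w_in = rng.standard_normal((hidden_ch, C + hidden_ch)) * 0.1
    w_blocks = [rng.standard_normal((hidden_ch, hidden_ch)) * 0.1
                for _ in range(num_blocks)]
    # The aggregation layer sees the concatenation of every block's output.
    w_agg = rng.standard_normal((C * scale**2, hidden_ch * num_blocks)) * 0.1

    hidden = np.zeros((hidden_ch, N))
    outputs = []
    for t in range(T):
        # Recurrence: no explicit motion estimation; the hidden state
        # carries temporal information from previous frames.
        x = np.concatenate([frames[t], hidden], axis=0)
        feat = relu(w_in @ x)
        block_feats = []
        for w in w_blocks:            # shallow-to-deep "focus" blocks
            feat = relu(w @ feat)
            block_feats.append(feat)  # skip-connect each block's features
        hidden = feat                 # state passed to the next frame
        agg = np.concatenate(block_feats, axis=0)  # feature aggregation
        hr = w_agg @ agg              # project to scale**2 sub-pixel outputs
        # Pixel-shuffle-like rearrangement: (C*s^2, N) -> (C, N*s^2).
        outputs.append(hr.reshape(C, scale**2, N).reshape(C, -1))
    return np.stack(outputs)
```

Because every block's output reaches the reconstruction layer, the number of features participating in reconstruction grows with depth rather than being capped by the channel count of the final layer alone.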
