Abstract

Background: Recurrent recovery is a common approach to video super-resolution that models the correlation between frames via hidden states. However, when this structure is applied to real-world scenarios, it produces unsatisfactory artifacts. We found that, in real-world video super-resolution training, using unknown and complex degradations better simulates the real-world degradation process. Methods: Based on this, we propose the RealFuVSR model, which simulates real-world degradation and mitigates the artifacts that arise in video super-resolution. Specifically, we propose a multi-scale feature extraction module (MSF) that extracts and fuses features at multiple scales, which facilitates the elimination of hidden-state artifacts. To improve the accuracy of hidden-state alignment, RealFuVSR uses optical-flow-guided deformable convolution. In addition, a cascaded residual upsampling module is used to eliminate the noise introduced by the upsampling process. Results: Experiments demonstrate that our RealFuVSR model not only recovers high-quality video but also outperforms the state-of-the-art RealBasicVSR and RealESRGAN models.
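To make the alignment idea concrete, below is a minimal sketch of flow-guided deformable alignment in PyTorch. The paper does not publish this code, so the module name, layer sizes, and structure here are assumptions modeled on the BasicVSR++-style design the abstract alludes to: optical flow provides a coarse base offset, and a small convolutional head predicts residual offsets and modulation masks for the deformable convolution.

```python
# Hypothetical sketch of flow-guided deformable alignment (not the authors' code).
# Assumption: a BasicVSR++-style design where optical flow gives a base offset
# and a conv head predicts residual offsets plus modulation masks.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class FlowGuidedDeformAlign(nn.Module):
    def __init__(self, channels=64, deform_groups=8):
        super().__init__()
        k = 3  # kernel size of the deformable convolution
        # From the current feature, the flow-warped previous feature, and the
        # flow itself, predict residual offsets (2*k*k per group) and masks
        # (k*k per group): 3*k*k channels per deformable group in total.
        self.offset_head = nn.Sequential(
            nn.Conv2d(2 * channels + 2, channels, 3, padding=1),
            nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(channels, deform_groups * 3 * k * k, 3, padding=1),
        )
        self.deform_conv = DeformConv2d(
            channels, channels, k, padding=1, groups=deform_groups
        )

    def forward(self, feat_cur, feat_prev_warped, flow):
        # feat_cur, feat_prev_warped: (N, C, H, W); flow: (N, 2, H, W).
        out = self.offset_head(
            torch.cat([feat_cur, feat_prev_warped, flow], dim=1)
        )
        o1, o2, mask = torch.chunk(out, 3, dim=1)
        offset = torch.cat([o1, o2], dim=1)
        # Add the flow as a base offset so the deformable kernels sample
        # around the flow-predicted positions (flip to (dy, dx) order,
        # repeated across every kernel position and group).
        offset = offset + flow.flip(1).repeat(1, offset.size(1) // 2, 1, 1)
        mask = torch.sigmoid(mask)
        return self.deform_conv(feat_prev_warped, offset, mask)
```

The design choice this illustrates is why flow guidance improves alignment accuracy: the network only has to learn small residual offsets around the flow estimate rather than the full motion, which stabilizes training of the deformable convolution.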
