Abstract

To address the problem that current video reconstruction methods cannot simultaneously achieve high perceptual quality and high temporal correlation, a video reconstruction method based on a globally perceptive generative adversarial network is proposed. The generator adopts a recurrent, iterative network architecture. To compensate for the frame-alignment network's limited extraction of global information, the alignment network incorporates global information extracted by a global information perception module when aligning frames. A static temporal loss and a temporal statistics loss, combined with a relativistic discriminator that takes frame sequences as input, improve the temporal correlation of the generated image sequence. Experimental comparisons were performed on the Vid4 and REDS test sets, where the method obtained the best image perceptual quality indices (LPIPS/NIQE) and better temporal correlation (tLPIPS), reaching 0.192/3.417/0.328 and 0.138/3.223/0.217, respectively. The experimental results show that the proposed method effectively improves the perceptual quality of reconstructed video frames while maintaining good temporal correlation.
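The abstract does not specify the exact form of the temporal losses. As a minimal illustrative sketch (the function name and formulation are assumptions, not the paper's definition), a temporal consistency penalty of the kind measured by metrics such as tLPIPS can be approximated by comparing the frame-to-frame changes of a generated sequence against those of the reference sequence:

```python
import numpy as np

def temporal_consistency_loss(generated: np.ndarray, reference: np.ndarray) -> float:
    """Illustrative temporal consistency penalty (not the paper's exact loss).

    Both inputs are frame sequences of shape (T, H, W) or (T, H, W, C).
    The loss is the mean absolute difference between consecutive-frame
    changes in the generated sequence and those in the reference sequence,
    so it is zero when both sequences evolve identically over time.
    """
    gen_diff = np.diff(generated, axis=0)   # changes between consecutive generated frames
    ref_diff = np.diff(reference, axis=0)   # changes between consecutive reference frames
    return float(np.mean(np.abs(gen_diff - ref_diff)))
```

For example, two identical sequences give a loss of 0.0, while perturbing a single generated frame increases the loss, penalizing flicker even when each individual frame looks plausible.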
