Abstract

Challenging motion, which tends to cause artifacts, is a key problem in video denoising. Recent video denoising methods have attempted to address this problem; however, they usually report aggregate performance on an entire dataset and do not comprehensively analyze the influence of different motion levels. We therefore question whether these methods can effectively handle different scene motions. To this end, we synthesize a dataset containing videos with different motion levels and capture a new dataset of videos involving large-scale motion. We then provide a comprehensive analysis of these carefully collected datasets and find that, as the motion level increases, the performance of denoising models based on implicit motion estimation (IME) declines sharply, whereas explicit motion estimation (EME) yields more robust denoising quality. In this work, we therefore present an EME-embedded progressive denoising framework that fully considers the relationship between noise removal and motion estimation. Specifically, we decouple video denoising into spatial denoising, EME-based frame reconstruction, and temporal refining. Spatial denoising improves the accuracy of the EME process for videos suffering from heavy noise, while the temporal refining process refines the denoised frame by exploiting the temporal redundancy of the reconstructed motion-free frames. Extensive experiments demonstrate that the proposed method outperforms existing state-of-the-art methods, especially on videos containing large-scale motion.
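The three-stage pipeline sketched in the abstract (spatial denoising, EME-based frame reconstruction, temporal refining) can be illustrated on toy 1-D "frames". Everything below is a hypothetical, simplified illustration under strong assumptions — plain Python lists as frames, a 3-tap moving average as the spatial denoiser, and an integer-shift SAD search standing in for explicit motion estimation — not the paper's actual implementation:

```python
# Illustrative sketch of a spatial-denoise -> EME -> temporal-refine pipeline.
# Function names, the integer-shift motion model, and all parameters are
# assumptions for exposition, not the method proposed in the paper.

def spatial_denoise(frame):
    """Stage 1: simple 3-tap moving average as a stand-in spatial denoiser."""
    n = len(frame)
    return [sum(frame[max(0, i - 1):min(n, i + 2)]) /
            len(frame[max(0, i - 1):min(n, i + 2)]) for i in range(n)]

def estimate_motion(ref, neighbor, max_shift=3):
    """Stage 2 (EME): integer shift minimizing mean absolute difference."""
    best_shift, best_cost = 0, float("inf")
    n = len(ref)
    for s in range(-max_shift, max_shift + 1):
        cost, count = 0.0, 0
        for i in range(n):
            j = i + s
            if 0 <= j < n:
                cost += abs(ref[i] - neighbor[j])
                count += 1
        cost /= count
        if cost < best_cost:
            best_cost, best_shift = cost, s
    return best_shift

def align(neighbor, shift):
    """Warp the neighbor toward the reference using the estimated shift
    (edge values are replicated at the borders)."""
    n = len(neighbor)
    return [neighbor[min(max(i + shift, 0), n - 1)] for i in range(n)]

def denoise_video(frames, ref_idx):
    """Full toy pipeline: denoise each frame spatially, align all frames to
    the reference via EME, then refine temporally by averaging the
    reconstructed motion-free frames."""
    spatial = [spatial_denoise(f) for f in frames]
    ref = spatial[ref_idx]
    aligned = [align(f, estimate_motion(ref, f)) for f in spatial]
    n = len(ref)
    return [sum(a[i] for a in aligned) / len(aligned) for i in range(n)]
```

In this toy setting, denoising before motion estimation mirrors the abstract's observation that spatial denoising improves EME accuracy under heavy noise: the SAD search operates on smoothed frames rather than raw noisy ones.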
