Abstract

It has been shown that deep convolutional neural networks (CNNs) reduce JPEG compression artifacts better than previous approaches. However, the latest video compression standards produce more complex artifacts than JPEG, including flickering, which is not well reduced by CNN-based methods developed for still images. Moreover, recent video compression algorithms include in-loop filters that already reduce blocking artifacts, so post-processing barely improves performance. In this paper, we propose a temporal-CNN architecture that reduces the artifacts of video compression standards as well as those of JPEG. Specifically, we exploit a simple CNN structure and introduce a new training strategy that captures the temporal correlation of consecutive frames in a video. Similar patches are aggregated from neighboring frames by a simple motion search and fed to the CNN, which further reduces the artifacts. Experiments show that our approach outperforms conventional CNN-based methods of similar complexity on image and video compression standards such as MPEG-2, AVC, and HEVC, with average PSNR gains of 1.27, 0.47, and 0.23 dB, respectively.
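The abstract only sketches the method, so the following is a minimal, hypothetical Python/PyTorch illustration of the two steps it names: aggregating similar patches from neighboring frames with a simple motion search, and feeding the stacked patches to a small CNN. The names (find_similar_patch, TemporalCNN), the 32x32 patch size, the +/-4-pixel search window, and the 5-layer residual network are all our assumptions for illustration; the paper's actual architecture and search parameters may differ.

```python
# Minimal sketch of the two stages described above: a block-matching motion
# search that gathers similar patches from neighboring frames, and a small
# residual CNN that restores the current patch from the stacked patches.
# Everything here (names, 32x32 patches, +/-4-pixel search, 5 conv layers)
# is an illustrative assumption, not the paper's released implementation.
import numpy as np
import torch
import torch.nn as nn

def find_similar_patch(ref_frame, patch, y, x, search=4):
    """Return the patch of ref_frame (grayscale HxW) with the smallest SAD
    to `patch`, searched within +/-`search` pixels of position (y, x)."""
    ph, pw = patch.shape
    best, best_sad = patch, np.inf  # fall back to the co-located patch
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy <= ref_frame.shape[0] - ph and 0 <= xx <= ref_frame.shape[1] - pw:
                cand = ref_frame[yy:yy + ph, xx:xx + pw].astype(np.float32)
                sad = np.abs(cand - patch).sum()
                if sad < best_sad:
                    best, best_sad = cand, sad
    return best

class TemporalCNN(nn.Module):
    """Small CNN whose input stacks the current compressed patch with its two
    matched neighbors (3 channels) and whose output is a restored patch."""
    def __init__(self, in_ch=3, feats=64, depth=5):
        super().__init__()
        layers = [nn.Conv2d(in_ch, feats, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(feats, feats, 3, padding=1), nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(feats, 1, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # Residual learning: predict the artifact and add it back to the
        # first channel, which holds the current compressed patch.
        return x[:, :1] + self.body(x)

# Usage with dummy frames (replace with real decoded frames):
cur_frame = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
prev_frame, next_frame = cur_frame.copy(), cur_frame.copy()
y, x = 64, 64
patch = cur_frame[y:y + 32, x:x + 32].astype(np.float32)
stack = np.stack([patch,
                  find_similar_patch(prev_frame, patch, y, x),
                  find_similar_patch(next_frame, patch, y, x)]) / 255.0
restored = TemporalCNN()(torch.from_numpy(stack).float().unsqueeze(0))
```

Stacking the matched patches as input channels lets the convolutions see temporally aligned content, which is one plausible way to suppress frame-to-frame flickering that a single-frame CNN cannot observe.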
