Abstract

The Versatile Video Coding (H.266/VVC) standard achieves up to 30% bit-rate reduction at the same quality compared with H.265/HEVC. To eliminate various coding artifacts such as blocking, blurring, ringing, and contouring effects, three in-loop filters have been incorporated in H.266/VVC. Recently, convolutional neural networks (CNNs) have attracted tremendous attention and achieved great success in many image processing tasks. In this paper, we focus on CNN-based filtering in video coding, where a single-model solution for post-loop filtering is designed to replace the current in-loop filters. An architecture is proposed to reduce the artifacts of video intra frames, taking advantage of useful information such as partitioning modes and quantization parameters (QP). Unlike existing CNN-based approaches, which generally need to train a separate model for each QP and are only suitable for the luma component, the proposed filter adapts well to different QP, i.e., various levels of frame degradation, and processes all components (i.e., luma and chroma) jointly. Experimental results show that the proposed CNN post-loop filter not only can replace the de-blocking filter (DBF), sample adaptive offset (SAO), and adaptive loop filter (ALF) in H.266/VVC, but also outperforms them, leading to 6.46%, 10.40%, and 12.79% BD-rate savings for Y, Cb, and Cr, respectively, under the all-intra configuration.
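The abstract does not include the network details, but the key idea of conditioning a single model on QP and partitioning information can be illustrated with a minimal sketch. The following PyTorch code is a hypothetical illustration, not the authors' implementation: the layer counts, channel layout (three color planes plus a normalized QP map and a partition-boundary map), and names are all assumptions made for clarity.

```python
# A minimal sketch (not the paper's code) of a single-model CNN post-loop
# filter conditioned on QP and partitioning maps. Layer sizes, the 5-channel
# input layout, and all names are hypothetical.
import torch
import torch.nn as nn

class PostLoopFilterCNN(nn.Module):
    """Jointly filters Y, Cb, and Cr using auxiliary QP and partition maps."""

    def __init__(self, channels: int = 64, num_blocks: int = 8):
        super().__init__()
        # Input: 3 reconstructed color planes + 1 normalized QP map
        # + 1 partition-boundary map = 5 channels (an assumed layout).
        self.head = nn.Conv2d(5, channels, kernel_size=3, padding=1)
        self.body = nn.Sequential(*[
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
            )
            for _ in range(num_blocks)
        ])
        # Output a residual correction for the 3 color planes.
        self.tail = nn.Conv2d(channels, 3, kernel_size=3, padding=1)

    def forward(self, recon: torch.Tensor, qp_map: torch.Tensor,
                part_map: torch.Tensor) -> torch.Tensor:
        # recon:    (N, 3, H, W) decoded Y/Cb/Cr (chroma assumed upsampled
        #           to luma resolution so all components are processed jointly)
        # qp_map:   (N, 1, H, W) QP normalized to [0, 1], so one model can
        #           adapt to every degradation level
        # part_map: (N, 1, H, W) block-partition boundaries (1 on a boundary)
        x = torch.cat([recon, qp_map, part_map], dim=1)
        residual = self.tail(self.body(self.head(x)))
        return recon + residual  # residual learning: predict the correction


# Usage with made-up values: filter one decoded intra frame coded at QP 37.
frames = torch.rand(1, 3, 128, 128)
qp = torch.full((1, 1, 128, 128), 37.0 / 63.0)  # VVC QP range is 0..63
parts = torch.zeros(1, 1, 128, 128)             # partition-boundary mask
filtered = PostLoopFilterCNN()(frames, qp, parts)
```

Feeding QP and partition maps as extra input channels is one plausible way to realize the abstract's claims; it lets a single set of weights modulate its filtering strength by QP and focus on block boundaries, rather than training one model per QP as in prior CNN filters.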
