Abstract

Neural Network (NN)-based coding techniques are being developed for hybrid video coding schemes such as the Versatile Video Coding (VVC) standard. In-loop filters and post-processing filters are two types of coding tools that aim to improve the visual quality of the reconstructed content. These tools are usually trained on large video or image datasets with varied content, but they rarely adapt to different content types. The proposed content-adaptive Convolutional Neural Network (CNN) post-processing filter addresses this problem and is content-adaptive in two ways. First, a relatively simple CNN is pre-trained on a general video dataset and then fine-tuned on the video to be coded; since only the bias terms of the CNN are fine-tuned, the signalling overhead is kept small. Second, a scaling factor controls the influence of the CNN post-processing filter on the final reconstruction. The CNN post-processing filter is evaluated on top of VVC Test Model (VTM) 11.0 with NN-based Video Coding (NNVC) 1.0 and, overall, it achieves Bjøntegaard Delta rate (BD-rate) savings of 2.37% (Y), 3.63% (U), and 2.24% (V) in the Random Access (RA) configuration.
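The two adaptation mechanisms described above can be sketched in PyTorch. The CNN architecture, the blending formula, and all names below are illustrative assumptions, not the paper's actual design: the point is that freezing all weights and updating only the bias terms leaves very few parameters to signal, and a scaling factor then blends the filter output with the decoded reconstruction.

```python
import torch
import torch.nn as nn

# Hypothetical small CNN post-filter (the paper's exact architecture
# is not specified in the abstract).
class PostFilter(nn.Module):
    def __init__(self, channels: int = 16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

model = PostFilter()

# Content adaptation 1: fine-tune only the bias terms. Freezing the
# convolution weights means only the small set of biases would need
# to be signalled to the decoder after per-video fine-tuning.
for name, param in model.named_parameters():
    param.requires_grad = name.endswith("bias")

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())

# Content adaptation 2: a scaling factor s controls how strongly the
# filter output influences the final reconstruction. The linear blend
# below is one plausible formulation, assumed for illustration.
s = 0.5
recon = torch.rand(1, 1, 64, 64)          # stand-in for a decoded luma block
with torch.no_grad():
    final = recon + s * (model(recon) - recon)
```

With this split, the bias terms are a tiny fraction of the network's parameters, which is what keeps the per-video signalling overhead low relative to re-sending the full model.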
