Abstract

Previous works have shown that artefacts generated by lossy video compression can be reduced by content-adaptive filtering, i.e., by overfitting Neural Network (NN) filters on the test content and signalling a compressed weight update. In this work, this approach is applied to post-processing filtering, and three main contributions are proposed. Firstly, a new set of learnable parameters, named multipliers, is incorporated into the NNs, and only those parameters are overfitted. Secondly, a new training scheme is proposed for jointly training multiple NNs, and is used to adapt the weights of an efficient NN architecture (originally designed for in-loop filtering) to function as a post-filter. Thirdly, the weight update is signalled via a newly-designed Supplemental Enhancement Information (SEI) message. The proposed post-filter achieved Bjøntegaard Delta rate (BD-rate) savings of about 5.01% (Y), 18.95% (Cb), and 17.33% (Cr) on top of the Versatile Video Coding (VVC) Test Model (VTM) 11.0 NN-based Video Coding (NNVC) 1.0, under Random Access (RA) Common Test Conditions (CTC). Additional experiments compare overfitting different sets of parameters and analyse the trade-off between overfitting time and coding gains.
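To illustrate the general idea of overfitting only a small set of multiplier parameters, the following is a minimal PyTorch sketch, not the paper's actual architecture or training procedure: the layer and class names (MultiplierConv, PostFilter) and all hyperparameters are hypothetical, and the compression and SEI signalling of the resulting weight update are omitted.

```python
import torch
import torch.nn as nn

class MultiplierConv(nn.Module):
    """Convolution followed by per-channel learnable multipliers.

    Only the multipliers are overfitted on the test content; the
    convolution weights stay frozen at their pretrained values.
    (Hypothetical layer for illustration only.)
    """
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
        # One multiplier per output channel, initialised to 1 (identity).
        self.mult = nn.Parameter(torch.ones(1, out_ch, 1, 1))

    def forward(self, x):
        return self.conv(x) * self.mult


class PostFilter(nn.Module):
    """Toy residual post-processing filter built from multiplier-augmented convs."""
    def __init__(self, ch=16):
        super().__init__()
        self.body = nn.Sequential(
            MultiplierConv(3, ch), nn.ReLU(),
            MultiplierConv(ch, ch), nn.ReLU(),
            MultiplierConv(ch, 3),
        )

    def forward(self, x):
        return x + self.body(x)  # filter the decoded frame residually


filt = PostFilter()

# Freeze everything except the multipliers before overfitting.
for name, p in filt.named_parameters():
    p.requires_grad = "mult" in name

opt = torch.optim.Adam(
    [p for p in filt.parameters() if p.requires_grad], lr=1e-3
)

decoded = torch.rand(1, 3, 64, 64)   # reconstructed (compressed) frame
original = torch.rand(1, 3, 64, 64)  # pristine source frame

for _ in range(100):                 # overfitting loop on the test content
    opt.zero_grad()
    loss = nn.functional.mse_loss(filt(decoded), original)
    loss.backward()
    opt.step()

# The weight update to be compressed and signalled (e.g. in an SEI message)
# consists only of the overfitted multipliers, far smaller than the full model.
update = {n: p.detach() for n, p in filt.named_parameters() if "mult" in n}
```

Because the multipliers are a tiny fraction of the model's parameters, the signalled weight update stays small while still adapting the filter to the specific content.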
