Abstract

Many convolutional neural network (CNN)-based in-loop filters have been proposed to improve coding performance. However, most existing methods are limited in performance and practicality by their single perception scale, high parameter complexity, and the need to train separate models for different quantization parameters (QPs). Inspired by neuron diversity, this paper proposes a lightweight multiattention recursive residual CNN-based in-loop filter (LMA-RRCNN) that handles encoded frames with various QP values, frame types (FTs), and temporal layers (TLs) via a single model. First, multiscale features are learned in the network and fused with the proposed multidensity block (MDB) and multiscale fusion attention group (MFAG). Second, a recursive structure is adopted to increase the model depth while sharing parameters across recursions, greatly reducing the parameter count. The proposed auxiliary parameter fusion attention (APFA) module and long-short-term skip connection (LSTSC) integrate QPs, FTs, and TLs into the model while accelerating training. Finally, LMA-RRCNN is applied in parallel with the standard in-loop filter, and the better enhanced result is selected for each patch. Experiments on standard test sequences show that the proposed method achieves average BD-rate savings of 13.70% and 11.87% under the all-intra and random-access configurations, respectively, outperforming other state-of-the-art approaches.
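As a hedged sketch of two of the ideas above (the recursive weight-sharing structure and the fusion of the QP as an auxiliary input so a single model serves all QPs), the following minimal NumPy illustration may help; all names, the normalization constant, and the modulation scheme are illustrative assumptions, not the paper's code:

```python
import numpy as np

def recursive_residual(x, w, qp, steps=4):
    """Minimal sketch (not the paper's implementation): one residual
    transform with shared weights `w` is applied `steps` times, so the
    effective depth grows without adding parameters, and a normalized QP
    value is fused in as an auxiliary scalar feature."""
    qp_feat = qp / 63.0            # assumed normalization of the QP range
    y = x
    for _ in range(steps):
        # shared-weight residual update, modulated by the QP feature
        y = y + qp_feat * np.tanh(y @ w)
    return y
```

Because the same `w` is reused at every recursion, increasing `steps` deepens the model without enlarging it, which is the parameter saving the abstract refers to.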
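The final patch-wise selection between the CNN output and the standard in-loop filter output can likewise be sketched in a hedged form; the encoder-side distortion criterion, the patch size, and all names here are assumptions for illustration only:

```python
import numpy as np

def select_per_patch(original, cnn_out, loop_out, patch=64):
    """Illustrative encoder-side selection (not the paper's code): for
    each patch, keep whichever enhanced result (CNN filter vs. standard
    in-loop filter) has lower distortion against the original frame, and
    record a one-bit flag per patch that a decoder could follow."""
    h, w = original.shape
    recon = loop_out.copy()
    flags = []
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            sl = (slice(y, min(y + patch, h)), slice(x, min(x + patch, w)))
            mse_cnn = np.mean((original[sl] - cnn_out[sl]) ** 2)
            mse_std = np.mean((original[sl] - loop_out[sl]) ** 2)
            use_cnn = mse_cnn < mse_std
            flags.append(use_cnn)
            if use_cnn:
                recon[sl] = cnn_out[sl]
    return recon, flags
```

Running the two filters in parallel and choosing per patch guarantees the combined result is never worse than the standard filter alone, at the cost of one signaled flag per patch.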
