In this paper, we propose a fast decision scheme using a lightweight neural network (LNN) to avoid redundant block partitioning in versatile video coding (VVC). In addition to the traditional quadtree structure, VVC adopts a more versatile block structure, named the multi-type tree (MTT) structure, which includes binary trees (BTs) and ternary trees (TTs). The MTT improves coding efficiency compared with previous video coding standards. However, the new tree structures, mainly the TT, significantly increase the complexity of the VVC encoder. Although this problem has inhibited the widespread application of VVC, it has not yet been investigated thoroughly in the literature. In this study, we first determine the statistical characteristics of coded parameters that correlate with the TT and develop two useful types of features, explicit VVC features (EVFs) and derived VVC features (DVFs), to facilitate VVC intra coding. These features can be obtained efficiently during intra prediction, before the best block partitioning is determined through rate-distortion optimization in VVC encoding. Based on these features, our LNN model decides whether to terminate the nested TT block structures subsequent to a quadtree split. The experimental results confirm that the proposed method substantially decreases the encoding complexity of VVC with only a slight coding loss under the All Intra configuration. Our code, models, and dataset are available at https://github.com/foriamweak/MTTPartitioning_LNN.
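To make the decision step concrete, the sketch below shows a minimal lightweight classifier that maps per-block features to a probability of skipping nested TT partitioning. This is not the authors' implementation (see the linked repository for that); the network size, feature dimension, and decision threshold are illustrative assumptions.

```python
# Minimal sketch of an LNN-style TT early-termination classifier.
# Feature names/dimensions and the 0.5 threshold are assumptions,
# not the published model.
import torch
import torch.nn as nn


class TTSkipLNN(nn.Module):
    def __init__(self, num_features: int = 8, hidden: int = 16):
        super().__init__()
        # Two small fully connected layers keep the per-block overhead low.
        self.net = nn.Sequential(
            nn.Linear(num_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_features) tensor of block-level features
        # (e.g., EVF/DVF-like quantities such as QP or block statistics).
        return self.net(x)


# Usage: decide whether to skip TT splits for one block.
model = TTSkipLNN()
features = torch.rand(1, 8)      # placeholder feature vector
p_skip = model(features).item()
skip_tt = p_skip > 0.5           # hypothetical decision threshold
print(f"p(skip TT) = {p_skip:.3f}, skip TT: {skip_tt}")
```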