Video coding standards arrange video frames in a prediction structure to exploit temporal correlations. Because the coding of a preceding frame affects the rate-distortion (R-D) performance of subsequent frames, resolving the complicated temporal dependencies among frames is crucial for improving coding efficiency. Previous algorithms have addressed this problem with handcrafted features or analytical models, even though natural videos exhibit widely varying temporal characteristics. In this paper, we propose a reinforcement learning (RL)-based decision algorithm to build the optimal hierarchical prediction structure under a random-access configuration (RA-HPS) in Versatile Video Coding (VVC). The goal is to maximize coding efficiency by selecting a sequence of optimal group-of-pictures (GOP) structures. Accordingly, we formulate adaptive GOP selection as the construction of a binary tree that represents a policy: among all plausible binary trees, we generate the one that minimizes the sum of R-D costs. A new RL policy representation is defined, and the optimal policy is obtained through sequential updates; the tree grows with a hierarchical state-action and reward sequence at each node. For efficient learning, the proposed technique uses a deep Q-network (DQN) architecture to capture the temporal correlation between frames, which helps learn the policy of the tree-based RL framework effectively. Experimental results demonstrate that the proposed technique achieves a significant Bjontegaard-Delta (BD) rate reduction compared with state-of-the-art GOP size-selection algorithms.
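To make the tree-based decision process concrete, the following is a minimal sketch of how a learned Q-function could drive recursive GOP splitting. It is an illustration under stated assumptions, not the paper's actual method: the network shape (`GopQNet`), the feature summary (`interval_features`), the two-action space (code the interval as one GOP vs. split it dyadically), and all parameter choices are hypothetical.

```python
import torch
import torch.nn as nn

FEAT_DIM = 8   # hypothetical per-interval feature size (e.g., temporal-activity statistics)
N_ACTIONS = 2  # 0 = code interval as a leaf GOP, 1 = split into two sub-intervals

class GopQNet(nn.Module):
    """Toy Q-network: maps interval features to Q-values interpreted as
    negated R-D cost estimates for each action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)

def interval_features(frames, lo, hi):
    """Placeholder state: summarize the temporal activity of frames[lo:hi]
    into a fixed-size feature vector (mean, spread, interval length)."""
    seg = frames[lo:hi]
    mu = seg.mean().item()
    sd = max(seg.std().item(), 1e-6)
    return torch.tensor([mu, sd, float(hi - lo)] + [0.0] * (FEAT_DIM - 3))

def build_tree(qnet, frames, lo, hi, min_len=2):
    """Greedily grow the binary prediction-structure tree: at each node the
    learned policy chooses 'leaf' or 'split', giving a hierarchical
    state-action sequence. Returns a nested (lo, hi) tuple tree."""
    if hi - lo <= min_len:
        return (lo, hi)  # interval too short to split further
    q = qnet(interval_features(frames, lo, hi))
    if q.argmax().item() == 0:
        return (lo, hi)  # policy elects to code this interval as one GOP
    mid = (lo + hi) // 2  # dyadic split, as in hierarchical-B structures
    return (build_tree(qnet, frames, lo, mid, min_len),
            build_tree(qnet, frames, mid, hi, min_len))

if __name__ == "__main__":
    torch.manual_seed(0)
    frames = torch.rand(32)  # stand-in for per-frame activity measures
    qnet = GopQNet()         # untrained here; training would use R-D costs as rewards
    with torch.no_grad():
        print(build_tree(qnet, frames, 0, 32))
```

In a trained system, the reward at each node would be derived from the measured R-D cost of the chosen coding structure, so that maximizing Q-values corresponds to minimizing the accumulated R-D cost over the whole tree.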