Abstract

Entropy coding is a fundamental technology in video coding that removes statistical redundancy among syntax elements. In high efficiency video coding (HEVC), context-adaptive binary arithmetic coding (CABAC) is adopted as the primary entropy coding method. CABAC consists of three steps: binarization, context modeling, and binary arithmetic coding. Because both the binarization processes and the context models in CABAC are manually designed, the probabilities of the syntax elements may not be estimated accurately, which limits the coding efficiency of CABAC. To address this problem, we propose a convolutional neural network-based arithmetic coding (CNNAC) method and apply it to compress the syntax elements of the intra-predicted residues in HEVC. Instead of manually designing the binarization processes and context models, we propose directly estimating the probability distribution of the syntax elements with a convolutional neural network (CNN), since CNNs can adaptively learn complex relationships between inputs and outputs when trained on large amounts of data. The values of the syntax elements, together with their estimated probability distributions, are then fed into a multi-level arithmetic codec to perform entropy coding. In this paper, we apply CNNAC to code the syntax elements of the DC coefficient; the lowest frequency AC coefficient; the second, third, fourth, and fifth lowest frequency AC coefficients; and the position of the last non-zero coefficient in the HEVC intra-predicted residues. The experimental results show that our proposed method achieves up to 6.7% BD-rate reduction and an average of 4.7% BD-rate reduction compared to the HEVC anchor under the all intra (AI) configuration.
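To make the pipeline concrete, the sketch below shows the final stage described in the abstract: a multi-level (multi-symbol) arithmetic codec that consumes a probability distribution over symbol values supplied by an external model. This is not the paper's implementation; it is a minimal floating-point illustration in which the distribution is a fixed table standing in for the CNN's output, and all function names are hypothetical.

```python
def build_cdf(probs):
    """Map each symbol to its cumulative interval [lo, hi) on [0, 1)."""
    cum, c = {}, 0.0
    for s in sorted(probs):
        cum[s] = (c, c + probs[s])
        c += probs[s]
    return cum

def ac_encode(symbols, probs):
    """Narrow the interval [low, high) once per symbol; any value in the
    final interval identifies the whole sequence."""
    cum = build_cdf(probs)
    low, high = 0.0, 1.0
    for s in symbols:
        span = high - low
        lo, hi = cum[s]
        high = low + span * hi
        low = low + span * lo
    return (low + high) / 2.0

def ac_decode(code, probs, n):
    """Invert the encoder: locate the sub-interval containing the code,
    emit that symbol, and rescale."""
    cum = build_cdf(probs)
    out = []
    for _ in range(n):
        for s, (lo, hi) in cum.items():
            if lo <= code < hi:
                out.append(s)
                code = (code - lo) / (hi - lo)
                break
    return out

# Stand-in for a model-estimated distribution over symbol values 0, 1, 2.
probs = {0: 0.5, 1: 0.25, 2: 0.25}
message = [0, 1, 2, 0, 0]
decoded = ac_decode(ac_encode(message, probs), probs, len(message))
```

A practical codec would use fixed-point arithmetic with periodic renormalization to avoid precision loss, and in CNNAC the distribution would be re-estimated per symbol from the CNN rather than held fixed; the interval-narrowing logic is the same.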

