Prediction in HEVC exploits the redundant information in the frame to improve compression efficiency. The computational complexity of prediction is comparatively high because the encoder determines the partition depth recursively through an exhaustive comparison of rate-distortion optimization (RDO) costs. Deep learning has shown strong results in this area compared with traditional signal processing, owing to its content-based analysis and learning ability. This paper proposes a deep depth decision algorithm that predicts the depth of the coding tree unit (CTU) and stores it as a 16-element vector; this model is pipelined to the HEVC encoder so that encoding time and bitrate can be compared. The comparison clearly shows the reduction in computational time and the effect on bitrate while encoding. The dataset used here is generated for the model from 110,000 frames at various resolutions, split into training, validation, and test sets, and the depth decision model is trained on it. The trained model, interfaced with the HEVC encoder, is compared against the unmodified encoder. Quality is evaluated for the proposed model with BD-PSNR and BD-Bitrate, showing a drop of 0.6 dB in BD-PSNR and an increase of 6.7% in BD-Bitrate. When pipelined with the original HEVC, the RDO cost shows an improvement over existing techniques. The pipelined deep depth decision algorithm reduces the average encoding time by about 72%, which points to the reduction in computational complexity. An average time saving of 88.49% is achieved with the deep depth decision algorithm-based encoder compared to existing techniques.
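To illustrate the output format described above (and not the authors' published architecture), the following is a minimal PyTorch sketch of a network that maps a 64×64 luma CTU to a 16-element depth vector, one depth label (0-3) per 16×16 sub-block; all layer sizes are assumptions for illustration.

```python
import torch
import torch.nn as nn

class DepthDecisionNet(nn.Module):
    """Hypothetical CNN: 64x64 CTU -> 16-element depth vector (depths 0-3)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 32x32 -> 16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),               # one cell per 16x16 sub-block
        )
        # 4 depth classes (0..3) for each of the 4x4 = 16 sub-blocks
        self.head = nn.Conv2d(64, 4, kernel_size=1)

    def forward(self, x):                          # x: (N, 1, 64, 64) luma block
        logits = self.head(self.features(x))      # (N, 4 classes, 4, 4)
        return logits.argmax(dim=1).flatten(1)    # (N, 16) depth vector

ctu = torch.rand(1, 1, 64, 64)                    # normalized luma samples
print(DepthDecisionNet()(ctu))                    # e.g. tensor([[2, 1, 3, ...]])
```

The per-element class prediction is one natural way to realize a 16-element CTU depth vector: an encoder consuming it can skip the exhaustive RDO search at depths the model rules out, which is the source of the reported time savings.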