Abstract

To improve coding efficiency by exploiting the local inter-component redundancy between the luma and chroma components, the cross-component linear model (CCLM) is included in the versatile video coding (VVC) standard. In the CCLM mode, linear model parameters are derived from the neighboring luma and chroma samples of the current block, and chroma samples are then predicted from the reconstructed samples in the collocated luma block using the derived parameters. However, because the CCLM design in the VVC test model (VTM)-6.0 contains many conditional branches that restrict its processes to only the available neighboring samples, parallel implementation of the CCLM is limited. To address this implementation issue, this paper proposes performing neighboring sample generation as the first process of the CCLM, thereby simplifying the subsequent CCLM processes. Because the proposed CCLM replaces unavailable neighboring samples with the adjacent available samples, the neighboring sample availability checks can be removed, which in turn simplifies the downsampling filter shapes for the luma samples. Consequently, the proposed CCLM can be efficiently implemented with parallel processing in both hardware and software implementations, owing to the removal of the neighboring sample availability checks and the simplification of the luma downsampling filters. The experimental results demonstrate that the proposed CCLM reduces the decoding runtime complexity of the CCLM mode with negligible impact on the Bjøntegaard delta (BD)-rate.
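The two ideas in the abstract — deriving a linear model from neighboring samples and padding unavailable neighbors up front so that no availability checks remain in the main loop — can be illustrated with the following sketch. This is a simplified floating-point illustration, not the integer arithmetic of the VVC specification; the function names and the mid-level fill value are hypothetical, and the parameter derivation follows the VVC-style approach of averaging the two smallest and two largest neighboring luma samples and their paired chroma samples.

```python
# Hypothetical, simplified sketch of CCLM prediction with up-front
# neighbor padding (floating-point; the VVC spec uses integer arithmetic).

def pad_unavailable(neighbors):
    """Replace unavailable neighbor samples (None) with the nearest
    available sample, so later steps need no availability checks."""
    out = list(neighbors)
    last = next((s for s in out if s is not None), None)
    if last is None:
        return [512] * len(out)  # e.g. mid-level value for 10-bit video
    for i, s in enumerate(out):
        if s is None:
            out[i] = last  # copy the adjacent available sample
        else:
            last = s
    return out

def derive_cclm_params(luma_nb, chroma_nb):
    """Derive (alpha, beta) from paired luma/chroma neighbors by averaging
    the two smallest and two largest luma neighbors (VVC-style min/max)."""
    pairs = sorted(zip(luma_nb, chroma_nb), key=lambda p: p[0])
    x_a = (pairs[0][0] + pairs[1][0]) / 2.0
    y_a = (pairs[0][1] + pairs[1][1]) / 2.0
    x_b = (pairs[-1][0] + pairs[-2][0]) / 2.0
    y_b = (pairs[-1][1] + pairs[-2][1]) / 2.0
    alpha = 0.0 if x_b == x_a else (y_b - y_a) / (x_b - x_a)
    beta = y_a - alpha * x_a
    return alpha, beta

def predict_chroma(luma_block, alpha, beta):
    """Predict each chroma sample from the collocated (downsampled)
    reconstructed luma sample: pred_C = alpha * rec_L + beta."""
    return [[alpha * s + beta for s in row] for row in luma_block]
```

Because `pad_unavailable` runs once before everything else, the derivation and prediction loops above are branch-free with respect to sample availability, which is the property that enables the parallel (e.g. SIMD) implementations discussed in the paper.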

Highlights

  • After decades of development of international video coding standards by the collaborative efforts of the ITU-T WP3/16 Video Coding Experts Group (VCEG) and the ISO/IEC JTC 1/SC 29/WG 11 Moving Picture Experts Group (MPEG), video coding technology has achieved tremendous progress in its compression capability

  • The proposed cross-component linear model (CCLM) can be efficiently implemented by employing parallel processing in both hardware and software implementations, owing to the removal of the neighboring sample availability checks and the simplification of the luma downsampling filters

  • The proposed CCLM can be efficiently implemented with parallel processing, with negligible Bjøntegaard delta (BD)-rate impact compared to VVC test model (VTM)-6.0


Summary

Introduction

After decades of development of international video coding standards by the collaborative efforts of the ITU-T WP3/16 Video Coding Experts Group (VCEG) and the ISO/IEC JTC 1/SC 29/WG 11 Moving Picture Experts Group (MPEG), video coding technology has achieved tremendous progress in its compression capability. As the recent demand by end-users for high-resolution and high-quality video content has increased in application areas such as video streaming, video conferencing, remote screen sharing, and cloud gaming, the industry needs more efficient video coding technology to further reduce the network bandwidth or storage requirements of video content. To respond to such needs from the industry, the state-of-the-art VVC standard [1] was developed by the Joint Video Experts Team (JVET) of the VCEG and the MPEG, and finalized in July 2020. To improve the coding efficiency of intra prediction, VVC includes many outstanding intra prediction coding tools, such as 67 intra prediction modes, wide angle prediction (WAP), intra prediction with multiple reference lines (MRL), intra sub-partitions (ISP), matrix-based intra prediction (MIP), CCLM intra prediction, and position-dependent prediction combination (PDPC) [9].

