Abstract
The new video coding standard, known as versatile video coding (VVC), is projected to be finalized by the end of 2020. This standard is designed mainly to address 8K video and emerging applications such as 360-deg and high-dynamic-range content. Intraprediction is the part of the prediction step in video coding that exploits spatial redundancy. Compared to high-efficiency video coding (HEVC), this module has been improved by increasing the set of angular intraprediction modes (IPMs) from 33 to 65, which models directional textures more accurately. Moreover, a quadtree plus binary tree (QTBT) structure replaces the quadtree (QT) of HEVC. These improvements, aimed at enhancing coding efficiency, result in a significant increase in coding complexity, especially in terms of encoding time. This paper fits into this context. It presents optimizations of the intramode and coding-unit size decisions using statistical fast-decision methods and deep learning. A fast intramode decision algorithm is proposed for the different binary depths of the QTBT structure. In addition, a deep-learning-based optimization for square blocks is included. Results show that combining these two approaches can significantly reduce the complexity of the VVC encoder. Under the all-intra (AI) configuration, the intraencoding time is reduced by about 61.04% while maintaining acceptable rate-distortion performance.