Abstract
This article proposes a video encoding method for just-noticeable-difference (JND) model-based perceptual coding at the residual quad-tree (RQT) level of high-efficiency video coding (HEVC). The method performs perceptual encoding at the RQT level, exploiting the luminance adaptation characteristics of the human visual system; the RQT level determines the partitioning of the transform unit after motion vectors have been obtained through motion estimation, a highly complex module in an encoder. In each RQT stage, the proposed algorithm determines the maximum quantization parameter (QP) that reduces the bitrate while maintaining similar subjective quality, i.e., it ensures, based on the JND model, no visual quality degradation relative to the initial QP value. To evaluate the performance of the proposed method, a JND model-based RQT-level multi-loop encoding method is applied to the HM16.0 HEVC reference software. Experimental results under the random access configuration of the common test conditions show that the bitrate is reduced, on average, by 5.78% and 10% for Class A and Class B videos, respectively, with a maximum reduction of 25.3% at almost the same visual quality.
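The per-stage QP search described above can be sketched as follows. This is a minimal, hypothetical illustration of the idea, not the paper's actual algorithm or the HM16.0 API: the JND threshold and distortion functions are toy stand-ins (a real implementation would encode the transform unit and measure distortion against a luminance-adaptation JND model), and all function names and constants here are assumptions for illustration.

```python
def jnd_threshold(block_luma_mean: float) -> float:
    """Toy luminance-adaptation JND: mid-gray blocks tolerate the least
    distortion, darker/brighter blocks more (illustrative numbers only)."""
    return 3.0 + abs(block_luma_mean - 128.0) / 32.0

def encode_distortion(block_luma_mean: float, qp: int) -> float:
    """Stand-in for encoding a transform unit at `qp` and measuring the
    added distortion relative to the initial QP; grows monotonically
    with QP (placeholder linear model, not a real rate-distortion curve)."""
    return 0.25 * (qp - 22)

def select_max_qp(block_luma_mean: float, qp_init: int, qp_max: int = 51) -> int:
    """Return the largest QP in [qp_init, qp_max] whose distortion stays
    within the block's JND threshold, i.e., is perceptually
    indistinguishable from the initial-QP result."""
    best = qp_init
    limit = jnd_threshold(block_luma_mean)
    for qp in range(qp_init, qp_max + 1):
        if encode_distortion(block_luma_mean, qp) <= limit:
            best = qp
        else:
            break  # distortion is monotone in QP, so the search can stop
    return best
```

Because a larger QP means coarser quantization and thus fewer bits, choosing the maximum JND-compliant QP per RQT stage trades imperceptible distortion for bitrate savings, which matches the multi-loop encoding idea in the abstract.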