Abstract

In visual perception, humans perceive only a discrete set of quality levels over a wide range of coding bitrates. In other words, videos compressed with a series of quantization parameters (QPs) exhibit only a limited number of perceived quality levels. In this paper, perceptual quantization is formulated as the problem of determining the just-perceived QP for each quality level, and a just noticeable coding distortion (JNCD) based perceptual quantization scheme is proposed. Specifically, multiple visual masking effects are first analyzed, and a linear regression (LR) based JNCD model is proposed to predict the JNCD thresholds for all quality levels. Based on this prediction model, the frame-level perceptual QPs for all quality levels are then derived under the constraint that the coding distortion approaches the predicted JNCD threshold as closely as possible. From the predicted frame-level perceptual QPs, the perceptual QP of each quality level for every coding unit (CU) is finally determined via a perceptual modulation function. Experimental results show that the proposed quality-wise perceptual quantization scheme significantly outperforms existing perceptual video coding algorithms, i.e., it saves more bitrate while achieving better perceptual quality.
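
The three-stage pipeline summarized above (JNCD threshold prediction, frame-level QP derivation, CU-level QP modulation) can be sketched as follows. This is a minimal illustrative sketch only: the masking features, LR coefficients, distortion model, and modulation function below are hypothetical placeholders, not the paper's actual formulations or parameters.

```python
# Hypothetical sketch of a quality-wise perceptual quantization pipeline.
# All numbers and functional forms are illustrative assumptions.

def predict_jncd_threshold(masking_features, weights, bias):
    """LR-based JNCD model: linear combination of visual-masking features."""
    return sum(w * f for w, f in zip(weights, masking_features)) + bias

def estimate_distortion(qp):
    """Toy distortion model: distortion grows with QP (placeholder)."""
    return 0.85 * (2 ** ((qp - 12) / 6.0))

def frame_level_perceptual_qp(jncd_threshold, qp_range=range(0, 52)):
    """Largest QP whose estimated distortion stays just below the JNCD threshold."""
    best_qp = qp_range[0]
    for qp in qp_range:
        if estimate_distortion(qp) <= jncd_threshold:
            best_qp = qp
        else:
            break
    return best_qp

def cu_level_qp(frame_qp, cu_masking_strength, max_offset=3):
    """Perceptual modulation: offset the frame QP by local masking strength."""
    offset = round(max_offset * (cu_masking_strength - 0.5) * 2)
    return max(0, min(51, frame_qp + offset))

# Example: one quality level, one frame, one CU.
features = [0.4, 0.7, 0.2]            # e.g. luminance, contrast, texture masking
weights, bias = [2.0, 3.0, 1.5], 0.5  # placeholder LR parameters
threshold = predict_jncd_threshold(features, weights, bias)
frame_qp = frame_level_perceptual_qp(threshold)
print(frame_qp, cu_level_qp(frame_qp, cu_masking_strength=0.8))
```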
