Abstract

Ultra-high definition (UHD) is currently being deployed in video distribution pipelines for improved quality of experience. The new recommendations not only improve spatial and temporal resolution, but also the quality and accuracy of individual pixels through wide color gamut (WCG) and high dynamic range (HDR) technologies. Indeed, HDR video technology aims at conveying the full range of perceptible shadow and highlight details with enough distinct tonal levels to prevent loss of visual information, while WCG increases the range of visible colors that can be represented. However, these technologies require standard digital pixel representations different from the ones currently employed in high-definition (HD) television. While the impact of increasing spatial and temporal resolution (i.e., higher resolution and frame rate) on compression efficiency has already been assessed, it is not clear how new pixel representations would affect compression performance. In this paper, we discuss the differences between the HD color pixel representation and the newly standardized representations developed to meet the requirements introduced by WCG and HDR. Moreover, we explain why existing HD pixel representations are inefficient for encoding HDR and WCG pixels. We also perform a statistical analysis of the pixel distribution in real images to explain how the pixel representation influences compression efficiency. Results show that by tailoring a pixel representation to the range of luminance and color values attainable in content and displays, bandwidth savings can be achieved while increasing the quality of delivered content. We conclude this paper by discussing the shortcomings of the color pixel representation recommended for the UHD television standards and by providing guidelines for creating a more efficient one.
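As an illustration of the kind of pixel representation the abstract refers to, the sketch below implements the perceptual quantizer (PQ) inverse EOTF standardized in SMPTE ST 2084 and ITU-R BT.2100 for HDR signals, using the published constants. The function name and the printed comparison are our own illustrative choices, not material from the paper; the point is that PQ allocates code values according to perceptual sensitivity, so dark tonal levels receive proportionally more codes than a legacy HD gamma curve would give them.

```python
# Illustrative sketch (not from the paper): the SMPTE ST 2084 / ITU-R BT.2100
# perceptual quantizer (PQ) inverse EOTF used by HDR pixel representations.
# PQ constants as defined in the standard:
M1 = 2610 / 16384          # 0.1593017578125
M2 = 2523 / 4096 * 128     # 78.84375
C1 = 3424 / 4096           # 0.8359375
C2 = 2413 / 4096 * 32      # 18.8515625
C3 = 2392 / 4096 * 32      # 18.6875

def pq_inverse_eotf(luminance_nits: float) -> float:
    """Map absolute luminance (0..10000 cd/m^2) to a normalized PQ code value."""
    y = max(luminance_nits, 0.0) / 10000.0
    y_m1 = y ** M1
    return ((C1 + C2 * y_m1) / (1 + C3 * y_m1)) ** M2

# Diffuse white (100 nits, roughly the peak of legacy HD content) already
# uses about half of the PQ code range; the other half is reserved for
# highlights up to 10000 nits.
print(round(pq_inverse_eotf(100.0), 3))   # ≈ 0.508
print(pq_inverse_eotf(10000.0))           # 1.0
```

This behavior is why a representation designed around the luminance range actually reachable by content and displays matters for compression: code values spent on unreachable luminance levels are wasted bits.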
