Abstract

Instead of streaming entire OmniDirectional Videos (ODVs), which are often sampled at ultra-high definition and high frame rate, viewport-adaptive streaming is preferred in practice: High-Quality (HQ) content is streamed within the current viewport and Low-Quality (LQ) content elsewhere to save network bandwidth. Such a scheme leads to a quality refinement whenever the user shifts focus to a new viewport. In this paper, we therefore model the perceptual impact of quality variations, induced by adjusting the Quantization Stepsize (QS, or q) and Spatial Resolution (SR, or s), with respect to the Refinement Duration (RD, or τ) when refining from an arbitrary LQ scale to an arbitrary HQ one. A wide range of quality variations is studied to cover representative use cases in practice, resulting in a unified analytical model: a product of separable exponential functions that capture the QS- and SR-induced perceptual impacts as functions of the RD, and a perceptual index measuring the subjective quality of the corresponding viewport video after refinement. The model is first validated in a managed lab environment via independent subjective assessments, with user navigation constrained to avoid unexpected noise; both the Pearson Correlation Coefficient (PCC) and Spearman's Rank Correlation Coefficient (SRCC) are around 0.97. We then extend the validation to a real-life viewport-dependent streaming system, still yielding a PCC and SRCC of about 0.96 when comparing collected subjective scores with model predictions.
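For illustration only, the separable product form described above can be sketched as follows; the symbols Q_v, f_QS, f_SR, and the rate parameters a_q, a_s are placeholders chosen for exposition, not the paper's exact notation or fitted functional forms:

\[
\hat{Q}(q, s, \tau) \;=\; Q_v(q, s)\cdot f_{\mathrm{QS}}(\tau)\cdot f_{\mathrm{SR}}(\tau),
\qquad \text{e.g. } f_{\mathrm{QS}}(\tau)=e^{-a_q \tau},\;\; f_{\mathrm{SR}}(\tau)=e^{-a_s \tau},
\]

where Q_v(q, s) denotes the perceptual index of the viewport video after refinement, τ is the RD, and the two exponential factors capture the QS- and SR-induced perceptual impacts, respectively; the paper's full text defines the actual exponential forms and their parameters.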
