Abstract

Several video quality metrics (VQMs) have been proposed to predict how humans perceive video quality. It is common to observe significant disagreement amongst the quality predictions that these VQMs produce for the same video sequence. Following an extensive literature search, we found no published work that investigates whether such disagreements convey useful information on the accuracy of VQMs. Herein, a measure for quantifying the disagreement between VQMs is proposed, and a small-scale subjective study is carried out to assess its effectiveness. In particular, the proposed disagreement measure is shown to be extremely effective in determining whether the quality of any given processed video sequence (PVS) can be accurately predicted by the VQMs. This information is particularly useful for identifying video sequences that are likely to degrade the end-user’s quality of experience (QoE), and for selecting the most effective PVSs to employ in a subjective test. We also show that the proposed disagreement measure can be effectively predicted from bitstream features, which establishes a link between the ability to accurately assess the quality of a PVS and the way it is encoded. In addition, an analysis is conducted to compare the performance of several well-known, widely used open-source metrics with that of two proprietary metrics used by a large media company to enhance its delivery pipeline. The outcome of this comparison highlights the suitability of the open-source VQM, Video Multi-method Assessment Fusion (VMAF), as a good benchmark quality measure for both industrial and academic environments.
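
The abstract does not define the proposed disagreement measure, so the following is only a minimal illustrative sketch of the general idea: mapping the raw scores of several VQMs for one PVS onto a common scale and taking their spread as a disagreement indicator. The metric names, value ranges, and min-max normalization below are assumptions for illustration, not the measure defined in the paper.

```python
import numpy as np

def disagreement(scores: dict[str, float],
                 score_ranges: dict[str, tuple[float, float]]) -> float:
    """Spread of normalized VQM scores for a single PVS.

    `scores` maps a metric name to its raw prediction for the PVS;
    `score_ranges` maps each metric to an assumed (worst, best) raw
    range so that all predictions land on a common 0..1 quality scale.
    """
    normalized = []
    for name, raw in scores.items():
        worst, best = score_ranges[name]
        normalized.append((raw - worst) / (best - worst))
    # A larger spread means the metrics disagree more about this PVS,
    # which would flag its predicted quality as less reliable.
    return float(np.std(normalized))

# Hypothetical example: three metrics scoring the same PVS.
scores = {"vmaf": 82.0, "psnr": 38.5, "ssim": 0.93}
ranges = {"vmaf": (0.0, 100.0), "psnr": (20.0, 50.0), "ssim": (0.0, 1.0)}
print(f"disagreement = {disagreement(scores, ranges):.3f}")
```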
