Abstract

In multimedia quality assessment, observer ratings are typically averaged into mean opinion scores (MOS) to obtain a subjective ground truth for a set of stimuli. This averaging discards valuable information about individual observer rating behaviour and inter-observer differences, information that could be used to improve subjective experiment procedures and quality of experience prediction models. In this paper, we present an inter-observer analysis framework that addresses quality assessment from a different angle, focusing on differences between observers rather than between stimuli. The framework comprises procedures for inter-observer analysis together with the necessary considerations during preparation and post-processing. The aim of this paper is to raise awareness that relying on MOS alone oversimplifies multimedia quality assessment.
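To make the abstract's central point concrete, the following is a minimal sketch (not taken from the paper; the data and variable names are purely illustrative) of the observer-level information that averaging into MOS discards, such as each observer's bias relative to the panel and their agreement with the panel consensus:

```python
import numpy as np

# Hypothetical ratings matrix: rows = observers, columns = stimuli.
# Values are illustrative ratings on a 1-5 scale.
ratings = np.array([
    [5, 4, 2, 1, 3],
    [4, 4, 3, 2, 3],
    [5, 5, 1, 1, 2],
    [3, 3, 2, 2, 2],
])

# Mean opinion score per stimulus: the average over observers.
mos = ratings.mean(axis=0)

# Per-observer information that the MOS alone discards:
# bias: how far each observer sits above or below the panel average,
# consistency: Pearson correlation of each observer's ratings with the MOS.
bias = (ratings - mos).mean(axis=1)
consistency = np.array([np.corrcoef(obs, mos)[0, 1] for obs in ratings])

print("MOS per stimulus:        ", mos)
print("Observer bias:           ", bias)
print("Observer-MOS correlation:", consistency)
```

Two observers can produce identical MOS contributions on average while rating individual stimuli very differently; statistics like the bias and correlation above expose such differences, which is the kind of inter-observer information the proposed framework targets.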
