Abstract

The rapid development of multimedia technology has dramatically raised users' expectations for quality of experience. To obtain ground-truth perceptual quality for model training, subjective assessment is necessary. Crowdsourcing platforms provide a convenient and feasible way to run large-scale experiments. However, the perceptual quality labels obtained this way are generally noisy. In this paper, we propose a probabilistic graphical annotation model to infer the underlying ground truth and to discover each annotator's behavior. In the proposed model, the ground-truth quality label is assumed to follow a categorical distribution rather than being a single number, i.e., different reliable opinions on the perceptual quality are allowed. In addition, different annotator behaviors in crowdsourcing are modeled, which allows us to estimate the probability that an annotator produces noisy labels during the test. The proposed model has been evaluated on both simulated and real-world data, where it consistently outperforms state-of-the-art models in terms of accuracy and robustness.
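As an illustration of the setting described above, the following sketch simulates crowdsourced quality labels under a simple generative process: each stimulus has a categorical ground-truth quality distribution, and each annotator has a reliability parameter that governs whether a given response follows that distribution or is uniformly random. This is a minimal, hypothetical example of the annotation scenario, not the authors' actual model or inference procedure; all names and parameter values are assumptions.

```python
# Hypothetical sketch of the crowdsourced annotation scenario:
# categorical ground truth per item + per-annotator reliability.
import numpy as np

rng = np.random.default_rng(0)
n_items, n_annotators, n_levels = 50, 20, 5   # e.g., a 5-point quality scale

# Ground truth: one categorical distribution per item
# (allows several "reliable" opinions on the same stimulus).
true_dist = rng.dirichlet(alpha=np.ones(n_levels) * 2.0, size=n_items)

# Annotator behavior: probability of answering reliably on any given trial.
reliability = rng.beta(5, 2, size=n_annotators)

# Observed noisy labels.
labels = np.empty((n_items, n_annotators), dtype=int)
for i in range(n_items):
    for a in range(n_annotators):
        if rng.random() < reliability[a]:
            labels[i, a] = rng.choice(n_levels, p=true_dist[i])  # reliable vote
        else:
            labels[i, a] = rng.integers(n_levels)                # random (noisy) vote

# Naive per-item estimate (empirical histogram); a full probabilistic model
# would instead weight annotators by their inferred reliability.
est_dist = np.stack([np.bincount(labels[i], minlength=n_levels) / n_annotators
                     for i in range(n_items)])
print("mean L1 gap between true and estimated distributions:",
      np.abs(true_dist - est_dist).sum(axis=1).mean())
```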
