Abstract

Quality of experience (QoE) models predict the subjective quality of multimedia based on the relevant quality of service (QoS) factors. Due to the large space of QoS factors and the high cost of conducting subjective tests, efficient sampling strategies are required to determine which QoS configurations are to be queried, that is, evaluated by subjects. In this study, we extend the IQX model proposed by [M. Fiedler, T. Hoßfeld, and P. Tran-Gia, "A generic quantitative relationship between quality of experience and quality of service," IEEE Netw., vol. 24, no. 2, pp. 36–41, Mar./Apr. 2010] toward a multidimensional QoS–QoE model (MIQX). To explore the complicated interactions between QoS factors more efficiently, we develop active learning algorithms for the multidimensional QoE model. We then conduct comprehensive experiments to compare the effectiveness of different sampling methods on crowdsourced video quality assessment tasks. In offline experiments, which assume that annotators give the same scores regardless of the querying order, we demonstrate that active learning performs best and that a space-filling algorithm performs significantly better than random sampling. However, when we analyze the active sampling approaches more deeply using a novel field experiment, we observe that the active learning algorithms, which have been shown to be effective in the offline setting, can fail due to the habituation effect and individual differences among annotators. The active learning methods can still succeed when these issues are mitigated. These findings suggest that simply simulating the sample acquisition order, as widely adopted in previous active learning literature [2]–[5], is not sufficient for multimedia quality assessment tasks.
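For orientation, the IQX hypothesis cited above models QoE as an exponential function of a single QoS impairment. A minimal sketch of this relationship, together with one plausible multidimensional generalization, is shown below; note that the additive multi-factor form `miqx` is an illustrative assumption only, as the abstract does not specify the paper's actual MIQX formulation.

```python
import numpy as np

def iqx(x, alpha, beta, gamma):
    """IQX hypothesis (Fiedler et al., 2010): QoE decays
    exponentially as the QoS impairment x (e.g., packet loss
    or jitter) grows. alpha, beta > 0; gamma is the QoE floor."""
    return alpha * np.exp(-beta * x) + gamma

def miqx(xs, alphas, betas, gamma):
    """Hypothetical multidimensional extension: one exponential
    term per QoS factor. This additive form is assumed here for
    illustration; the paper's MIQX model may differ (e.g., it may
    include interaction terms between factors)."""
    xs = np.asarray(xs, dtype=float)
    return float(np.sum(alphas * np.exp(-betas * xs)) + gamma)

# At zero impairment, IQX predicts the maximum QoE alpha + gamma,
# and predicted QoE decreases monotonically as impairment grows.
```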
