Abstract

Dynamic adaptive streaming over HTTP (DASH) plays a key role in video transmission over the Internet. Conventional DASH adaptation approaches mainly focus on optimizing the overall quality of experience (QoE) across all clients, neglecting the diversity of QoE preferences among users. In this paper, we propose a meta-learning framework for multi-user preferences (MLMP) as a new DASH adaptation approach, which is able to optimize the diverse QoE preferences of different users. Specifically, we first design a subjective experiment to analyze the differences in QoE preferences across users, where QoE is measured by visual quality, quality fluctuation, and rebuffering events. Based on our findings, we formulate the QoE optimization under multi-user preferences as a multi-task deep reinforcement learning (DRL) problem. In our formulation, the QoE preference of each user is modeled in the overall QoE calculation by assigning weights to the three QoE metrics. Then, the MLMP framework is developed to solve the proposed multi-task DRL problem, such that the preferences regarding visual quality, fluctuation, and rebuffering events can be optimized for different users in DASH adaptation. Finally, simulation results show that the proposed approach outperforms state-of-the-art DASH adaptation approaches in satisfying different users' QoE preferences regarding visual quality, fluctuation, and rebuffering events.
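As a rough illustration of the weighted QoE formulation described in the abstract, the sketch below computes a per-user overall QoE as a weighted combination of visual quality, quality fluctuation, and rebuffering penalties. The reward form, weight values, and all names here are assumptions made for illustration; the paper's exact QoE model is not given in the abstract.

```python
# Hypothetical sketch: per-user weighted QoE, where each user's preference is
# expressed as weights on three metrics (visual quality, fluctuation,
# rebuffering). The reward form and values are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class QoEPreference:
    w_quality: float      # weight on visual quality
    w_fluctuation: float  # penalty weight on quality fluctuation
    w_rebuffer: float     # penalty weight on rebuffering time


def per_chunk_qoe(pref: QoEPreference,
                  quality: float,
                  prev_quality: float,
                  rebuffer_time: float) -> float:
    """Weighted QoE for one downloaded video chunk (illustrative only)."""
    fluctuation = abs(quality - prev_quality)
    return (pref.w_quality * quality
            - pref.w_fluctuation * fluctuation
            - pref.w_rebuffer * rebuffer_time)


# Example: a user who strongly dislikes rebuffering.
pref = QoEPreference(w_quality=1.0, w_fluctuation=0.5, w_rebuffer=4.0)
print(per_chunk_qoe(pref, quality=3.0, prev_quality=2.0, rebuffer_time=0.8))
```

In a multi-task DRL setting along these lines, each weight vector would define one task's reward, and a meta-learned policy could be adapted to a given user's weights.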
