Abstract

In HTTP-based adaptive streaming systems, the media server simply stores video content segmented into a series of small chunks encoded at different quality levels and sizes. The decision on the quality level of the next chunk, which determines the viewing experience, is left to the client. This is a challenging task, especially in mobile environments, due to unexpected changes in network bandwidth. Using computer simulations, previous work has demonstrated that the Markov decision process (MDP) is very effective for such decision making and can reduce video freezing or re-buffering events drastically compared to other adaptation methods. However, to date there has been no practical implementation and evaluation of MDP-based DASH players. In this work, we extend a publicly available DASH player recently released by the DASH Industry Forum to realise a real DASH player that implements MDP-based video adaptation. We implement two alternative MDP optimisation algorithms, value iteration and Q-learning, and evaluate their performance in real driving conditions over 300 minutes of video streaming. Our results show that value iteration and Q-learning reduce video freezing by factors of 8 and 11, respectively, compared to the default decision-making algorithm implemented in the public DASH player.
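The abstract names Q-learning as one of the two MDP optimisation algorithms but does not give its formulation. A minimal sketch of how a tabular Q-learning update for per-chunk quality selection could look is shown below; the state definition (buffer-level and bandwidth buckets), action set, reward shape, and all identifiers are illustrative assumptions, not the paper's actual design.

```python
import random
from collections import defaultdict

# Assumed discretisation: state = (buffer_bucket, bandwidth_bucket),
# action = quality index of the next chunk. Hyperparameters are placeholders.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
QUALITIES = [0, 1, 2, 3]            # e.g. four representations in the MPD

Q = defaultdict(float)              # Q[(state, action)] -> value estimate


def choose_quality(state):
    """Epsilon-greedy choice of the next chunk's quality level."""
    if random.random() < EPSILON:
        return random.choice(QUALITIES)
    return max(QUALITIES, key=lambda a: Q[(state, a)])


def update(state, action, reward, next_state):
    """Standard tabular Q-learning update applied after a chunk download completes."""
    best_next = max(Q[(next_state, a)] for a in QUALITIES)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

In such a setup the reward would typically combine the chosen quality with penalties for quality switches and re-buffering, which is what drives the reduction in freezing events the paper reports.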
