Abstract

In this paper we present an integration of several user- and resource-related factors for the design of dynamic adaptation techniques. Our first contribution is an original reinforcement-learning approach for developing better adaptation agents. Integrated with the content, these agents improve gradually by taking into account both the user's behaviour and the usage context. Our second contribution is to apply this generic approach to a ubiquitous streaming problem: mobile users experience large latencies while accessing streaming media. We propose to adapt the streaming through prefetching and to model this decision problem as a Markov decision process. We discuss this formal framework and make its relationship with reinforcement learning explicit. We support the benefits of our approach with results from simulations and experiments.
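To make the idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual formulation) of how a prefetching decision could be framed as an MDP and learned with tabular Q-learning: the state discretisation (buffer level, network quality), the action set, and the reward shaping are all illustrative assumptions.

```python
# Illustrative sketch only: a tabular Q-learning agent that decides, at each
# step, whether to prefetch the next media segment. State space, actions and
# rewards are hypothetical placeholders, not the paper's actual MDP model.
import random

ACTIONS = ("idle", "prefetch")           # hypothetical action set
BUFFER_LEVELS = range(5)                 # discretised buffer occupancy
NET_STATES = ("slow", "fast")            # discretised network quality

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1    # learning rate, discount, exploration

# Q-table over (buffer, network) states and the two actions.
Q = {(b, n): {a: 0.0 for a in ACTIONS} for b in BUFFER_LEVELS for n in NET_STATES}

def step(state, action):
    """Hypothetical environment: returns (next_state, reward)."""
    buffer, _net = state
    if action == "prefetch":
        # Prefetching fills the buffer but costs bandwidth/energy.
        buffer = min(buffer + 1, max(BUFFER_LEVELS))
        reward = -0.1
    else:
        # Playback drains the buffer; an empty buffer means a stall (latency).
        buffer = max(buffer - 1, 0)
        reward = -1.0 if buffer == 0 else 0.5
    net = random.choice(NET_STATES)      # network quality evolves randomly here
    return (buffer, net), reward

def choose(state):
    """Epsilon-greedy action selection."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(Q[state], key=Q[state].get)

state = (2, "fast")
for _ in range(10_000):
    action = choose(state)
    next_state, reward = step(state, action)
    best_next = max(Q[next_state].values())
    # Standard Q-learning update rule.
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])
    state = next_state

print("Learned policy (state -> action):")
for s in sorted(Q):
    print(s, "->", max(Q[s], key=Q[s].get))
```

Under these toy dynamics the agent typically learns to prefetch when the buffer is low, which is the intuition behind adapting the streaming by prefetching; the paper's own state variables and reward design should be consulted for the real model.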
