Abstract

To cope with the drastic increase of multimedia traffic dominated by streaming video, mobile edge computing (MEC) can be exploited to enable intelligent caching at the mobile network edge, reducing redundant data transmissions and improving content delivery performance. Under the MEC architecture, content providers (CPs) can access MEC servers to deploy popular content items and improve users' quality of experience. Designing an efficient caching policy is challenging for CPs because of content dynamics, unknown spatial-temporal traffic demands, and limited storage capacity. Knowledge of users' preferences is essential for efficient caching but is often unavailable in advance. Machine learning can be used to learn users' preferences from historical demand information and to decide which content items to cache at the MEC servers. In this paper, we propose a learning-based cooperative content caching policy for the MEC architecture in which users' preferences are unknown and only historical content demands can be observed. We model cooperative content caching as a multi-agent multi-armed bandit problem and propose a multi-agent reinforcement learning (MARL)-based algorithm to solve it. Simulation experiments conducted on a real-world MovieLens dataset show that the proposed MARL-based caching policy significantly improves the content cache hit rate and reduces content downloading latency in comparison with other popular caching strategies.
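The abstract frames edge caching as a multi-armed bandit problem: each cache slot decision is an "arm pull" whose reward is a cache hit, learned from historical demands alone. As a minimal illustration of that framing (not the paper's cooperative MARL algorithm), the sketch below models a single MEC server as a UCB1 bandit agent that repeatedly picks which items to cache and learns item popularities from observed hits. All names, the Zipf demand model, and the parameter values are illustrative assumptions, not details from the paper.

```python
import math
import random


class UCBCacheAgent:
    """One MEC server as a bandit agent: arms are content items,
    reward is 1 when a cached item is requested (a hit), else 0.
    Illustrative single-agent sketch, not the paper's MARL policy."""

    def __init__(self, num_items, cache_size):
        self.num_items = num_items
        self.cache_size = cache_size
        self.counts = [0] * num_items    # how often each item was cached
        self.values = [0.0] * num_items  # empirical hit rate per item
        self.t = 0

    def select_cache(self):
        """Cache the items with the highest UCB1 scores."""
        self.t += 1
        scores = []
        for i in range(self.num_items):
            if self.counts[i] == 0:
                scores.append((float("inf"), i))  # try unexplored items first
            else:
                bonus = math.sqrt(2.0 * math.log(self.t) / self.counts[i])
                scores.append((self.values[i] + bonus, i))
        scores.sort(reverse=True)
        return [i for _, i in scores[: self.cache_size]]

    def update(self, cached, request):
        """Observe one request; credit each cached item with hit or miss."""
        for i in cached:
            reward = 1.0 if i == request else 0.0
            self.counts[i] += 1
            self.values[i] += (reward - self.values[i]) / self.counts[i]


def simulate(num_items=50, cache_size=5, rounds=3000, seed=0):
    """Compare UCB caching with a random cache under Zipf-like demand."""
    rng = random.Random(seed)
    weights = [1.0 / (rank + 1) for rank in range(num_items)]  # Zipf(1) popularity
    agent = UCBCacheAgent(num_items, cache_size)
    ucb_hits = rand_hits = 0
    for _ in range(rounds):
        request = rng.choices(range(num_items), weights=weights)[0]
        cached = agent.select_cache()
        ucb_hits += request in cached
        agent.update(cached, request)
        rand_hits += request in rng.sample(range(num_items), cache_size)
    return ucb_hits / rounds, rand_hits / rounds
```

Under the assumed Zipf-like demand, the learned policy's hit rate climbs well above the random baseline once the agent has sampled every item, which is the intuition behind learning caching decisions from demand history. The cooperative, multi-agent setting the paper studies would additionally coordinate caches across neighboring MEC servers.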
