Abstract

To address the rapid growth of data traffic dominated by video-on-demand streaming, mobile edge caching/computing (MEC) can be exploited to cache content intelligently at mobile network edges, alleviating redundant traffic and improving content delivery efficiency. Under the MEC architecture, content providers (CPs) can deploy popular video files at MEC servers to improve users' quality of experience (QoE). Designing an efficient content caching policy is crucial for CPs due to content dynamics, unknown spatial-temporal traffic demands, and limited service capacity. Knowledge of users' preferences is essential for efficient content caching, yet it is often unavailable in advance. In this case, machine learning can be used to learn users' preferences from historical demand information and decide which video files to cache at the MEC servers. In this paper, we propose a multi-agent reinforcement learning (MARL)-based cooperative content caching policy for the MEC architecture in which users' preferences are unknown and only historical content demands can be observed. We formulate cooperative content caching as a multi-agent multi-armed bandit problem and propose a MARL-based algorithm to solve it. Simulation experiments conducted on a real dataset from MovieLens show that the proposed MARL-based cooperative content caching scheme significantly reduces content downloading latency and improves the content cache hit rate compared with other popular caching schemes.
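To make the multi-agent multi-armed bandit view concrete, the following is a minimal simulation sketch, not the paper's algorithm: each MEC server is an independent UCB1 learner whose arms are candidate video files with unknown local demand, and a request missed locally may still be served from a neighboring server's cache (the cooperative case) before falling back to the remote content server. All names and parameters here (UCBCachingAgent, NUM_FILES, the Zipf popularity model, the UCB1 index rule) are illustrative assumptions introduced for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_FILES, CACHE_SIZE, NUM_AGENTS = 100, 10, 3   # illustrative sizes
ROUNDS, REQUESTS_PER_ROUND = 2000, 50

# Hypothetical ground truth: Zipf-like file popularity, permuted per
# server so each MEC server faces different (unknown) local preferences.
base = 1.0 / np.arange(1, NUM_FILES + 1) ** 0.8
popularity = [rng.permutation(base / base.sum()) for _ in range(NUM_AGENTS)]

class UCBCachingAgent:
    """One MEC server modeled as a multi-armed bandit player: each arm is
    a candidate file, and an arm's unknown reward is its local demand."""

    def __init__(self, num_files, cache_size):
        self.cache_size = cache_size
        self.counts = np.zeros(num_files)    # times each file was cached
        self.rewards = np.zeros(num_files)   # cumulative observed demand
        self.t = 0                           # round counter

    def select_cache(self):
        """Cache the files with the highest UCB1 indices this round."""
        self.t += 1
        means = self.rewards / np.maximum(self.counts, 1)
        bonus = np.sqrt(2.0 * np.log(self.t) / np.maximum(self.counts, 1))
        ucb = np.where(self.counts == 0, np.inf, means + bonus)
        return np.argsort(ucb)[-self.cache_size:]

    def update(self, cached, demand_freq):
        """Update arm statistics for the files that were cached."""
        for f in cached:
            self.counts[f] += 1
            self.rewards[f] += demand_freq[f]

agents = [UCBCachingAgent(NUM_FILES, CACHE_SIZE) for _ in range(NUM_AGENTS)]
local_hits = neighbor_hits = misses = 0

for _ in range(ROUNDS):
    caches = [a.select_cache() for a in agents]
    cache_sets = [set(c.tolist()) for c in caches]
    demands = np.zeros((NUM_AGENTS, NUM_FILES))
    for i in range(NUM_AGENTS):
        reqs = rng.choice(NUM_FILES, size=REQUESTS_PER_ROUND, p=popularity[i])
        for f in reqs:
            demands[i, f] += 1
            if f in cache_sets[i]:
                local_hits += 1        # served from the local MEC cache
            elif any(f in cache_sets[j] for j in range(NUM_AGENTS) if j != i):
                neighbor_hits += 1     # cooperative fetch from a peer server
            else:
                misses += 1            # fetched from the remote content server
    for i, a in enumerate(agents):
        a.update(caches[i], demands[i] / REQUESTS_PER_ROUND)

total = local_hits + neighbor_hits + misses
print(f"local hit rate    : {local_hits / total:.3f}")
print(f"neighbor hit rate : {neighbor_hits / total:.3f}")
print(f"overall miss rate : {misses / total:.3f}")
```

Note that in this sketch cooperation appears only in how hits are counted; the paper's MARL algorithm additionally coordinates what the agents cache, which is precisely what independent UCB1 learners cannot capture.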
