Abstract

The infrastructure for multimedia content delivery increasingly relies on edge infrastructure (e.g., base stations and smart routers), which not only offloads centralized servers but also improves quality of service by letting users access content nearby. Deep reinforcement learning (DRL) algorithms have been widely adopted for edge cache replacement because they can adapt to changing request patterns. However, a DRL cache replacement agent learns extremely slowly at an edge cache because the requests it observes are sparse. In this paper, we propose a popularity distillation framework that allows edge caches to draw on the content replication strategies of other edge caches. First, we design a collaborative edge caching framework in which each cache learns its own strategy from local requests via deep reinforcement learning and learns from others by exchanging the "soft" popularity distributions observed at different edge caches. Second, we design a neighbor maintenance mechanism in which each agent iteratively selects only a small number of neighboring edge caches to collaborate with. Experiments driven by a real-world mobile video dataset show that our design improves the cache hit rate by 3.0% over a baseline without popularity distillation, at only a small data transmission overhead during distillation.
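The abstract does not give the exact form of the distillation objective, but a natural reading of exchanging "soft" popularity distributions is the standard soft-target knowledge distillation loss, where a cache matches its own predicted popularity against a neighbor's temperature-softened distribution. The sketch below is a minimal illustration under that assumption; the function name, tensor shapes, and temperature value are hypothetical and not taken from the paper.

    import torch
    import torch.nn.functional as F

    def popularity_distillation_loss(local_logits: torch.Tensor,
                                     neighbor_logits: torch.Tensor,
                                     temperature: float = 2.0) -> torch.Tensor:
        """KL divergence between the local agent's predicted content
        popularity and a neighbor's softened popularity distribution.

        Both inputs hold per-content scores of shape (batch, num_contents).
        """
        # Temperature-scaled softmax yields the "soft" distributions.
        soft_local = F.log_softmax(local_logits / temperature, dim=-1)
        soft_neighbor = F.softmax(neighbor_logits / temperature, dim=-1)
        # Scale by T^2 so gradient magnitudes stay comparable across
        # temperatures, as in standard knowledge distillation.
        return F.kl_div(soft_local, soft_neighbor,
                        reduction="batchmean") * temperature ** 2

    # Hypothetical usage: 100 candidate contents, one local agent and one
    # popularity vector received from a neighboring edge cache.
    local = torch.randn(1, 100)
    neighbor = torch.randn(1, 100)
    loss = popularity_distillation_loss(local, neighbor)

Exchanging full soft distributions rather than hard top-k content lists preserves the relative popularity of less-requested items; this is the usual motivation for soft targets in knowledge distillation and fits the paper's setting, where a sparsely loaded cache benefits from a neighbor's richer request history.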
