Abstract

With the explosive growth of mobile data traffic generated by application services such as video-on-demand, together with users' stringent quality-of-experience requirements, mobile edge caching is a promising paradigm for reducing delivery latency and network congestion by serving content requests locally. However, deciding which content to evict when the cache is full is challenging: the content volume is enormous, the cache capacity at the network edge is limited, and the future request pattern is unknown in advance. In this paper, we propose OA-Cache, an end-to-end cache replacement algorithm based on oracle approximation, to maximize the cache hit rate. Specifically, we construct a complex model that uses a temporal convolutional network to capture the long- and short-term dependencies between content requests. An attention mechanism is then adopted to capture the correlations between the requests in the sliding window and the cached contents. Instead of training a policy to mimic Belady's algorithm, which evicts the content with the longest reuse distance, we cast the learning task as a classification problem that distinguishes unpopular contents from popular ones. Finally, we apply knowledge distillation to transfer knowledge from the large pre-trained complex network to a lightweight network that readily fits the network edge scenario. To validate the effectiveness of OA-Cache, we conduct extensive experiments on real-world datasets. The evaluation results demonstrate that OA-Cache achieves better performance than the candidate algorithms.
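The abstract outlines the pipeline but gives no implementation details. The following is a minimal PyTorch sketch of that pipeline under assumed settings; the class and function names (OACacheSketch, distillation_loss), the dimensions (EMBED_DIM, WINDOW, N_CONTENTS), and all hyperparameters are hypothetical choices for illustration, not the paper's actual design.

```python
# Minimal sketch, assuming PyTorch and illustrative dimensions/hyperparameters
# (EMBED_DIM, WINDOW, N_CONTENTS are hypothetical; the paper does not specify them).
import torch
import torch.nn as nn
import torch.nn.functional as F

EMBED_DIM = 64       # assumed content-embedding size
WINDOW = 32          # assumed request sliding-window length
N_CONTENTS = 10_000  # assumed content-catalogue size


class CausalConvBlock(nn.Module):
    """One dilated causal 1-D convolution block of a temporal convolutional network."""
    def __init__(self, channels: int, dilation: int):
        super().__init__()
        self.pad = (3 - 1) * dilation  # left-only padding keeps the convolution causal
        self.conv = nn.Conv1d(channels, channels, kernel_size=3, dilation=dilation)

    def forward(self, x):                    # x: (batch, channels, time)
        out = F.pad(x, (self.pad, 0))        # pad only on the past side
        return F.relu(self.conv(out)) + x    # residual connection


class OACacheSketch(nn.Module):
    """TCN request encoder + attention over cached contents + popularity classifier."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(N_CONTENTS, EMBED_DIM)
        self.tcn = nn.Sequential(*(CausalConvBlock(EMBED_DIM, d) for d in (1, 2, 4)))
        self.attn = nn.MultiheadAttention(EMBED_DIM, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(EMBED_DIM, 2)  # popular vs. unpopular

    def forward(self, request_window, cached_ids):
        # request_window: (batch, WINDOW) content ids; cached_ids: (batch, cache_size)
        req = self.embed(request_window).transpose(1, 2)   # (B, E, W)
        ctx = self.tcn(req).transpose(1, 2)                # (B, W, E)
        cached = self.embed(cached_ids)                    # (B, C, E)
        # each cached item attends to the request window to estimate its popularity
        attended, _ = self.attn(query=cached, key=ctx, value=ctx)
        return self.classifier(attended)                   # (B, C, 2) logits


def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Standard knowledge-distillation objective: soft teacher targets + hard labels."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits.reshape(-1, 2), labels.reshape(-1))
    return alpha * soft + (1 - alpha) * hard
```

In this reading, the teacher is the large pre-trained network and the student is the lightweight network deployed at the edge; at eviction time, the cached item with the highest "unpopular" score would be removed. The actual architecture, labeling scheme, and distillation setup are described in the full paper.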
