Abstract

Edge caching is a promising method to cope with the traffic explosion in future networks. To satisfy user requests, contents can be proactively cached in proximity to users (e.g., at base stations or on user devices). Recently, several learning-based edge caching optimizations have been discussed. However, most previous studies struggle with the dynamic and constantly expanding action and caching spaces, leading to impracticality and low efficiency. In this paper, we study the edge caching optimization problem by utilizing the Double Deep Q-network (Double DQN) learning framework to maximize the hit rate of user requests. First, we build a Device-to-Device (D2D) sharing model that considers both online and offline factors, and we formulate the optimization problem, which is proved to be NP-hard. Then the edge caching replacement problem is modeled as a Markov decision process (MDP). Finally, an edge caching strategy based on Double DQN is proposed. Experimental results on large-scale real-world traces show the effectiveness of the proposed framework.
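The core of the Double DQN framework mentioned above is decoupling action selection from action evaluation when bootstrapping the Q-value target. The following is a minimal illustrative sketch of that target computation; the function name, network shapes, and example numbers are assumptions for exposition, not the paper's implementation.

```python
import numpy as np

def double_dqn_target(reward, q_online_next, q_target_next, gamma=0.99, done=False):
    """Compute the Double DQN target for one transition.

    The online network selects the best next action (argmax), while the
    target network evaluates it. This decoupling reduces the Q-value
    overestimation bias of vanilla DQN.
    """
    if done:
        return reward
    best_action = int(np.argmax(q_online_next))      # action chosen by online net
    return reward + gamma * q_target_next[best_action]  # value from target net

# Illustrative transition: reward 1.0 for a cache hit, and Q-value
# estimates for 3 candidate caching actions from each network.
q_online = np.array([0.2, 0.8, 0.5])
q_target = np.array([0.3, 0.6, 0.9])
y = double_dqn_target(1.0, q_online, q_target, gamma=0.9)
# Online net picks action 1 (0.8); target net evaluates it as 0.6,
# so y = 1.0 + 0.9 * 0.6 = 1.54.
```

Note that a vanilla DQN target would instead use `max(q_target_next)` (0.9 here), illustrating the overestimation that Double DQN avoids.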

Highlights

  • With the development of network services and the sharp increase in mobile devices, severe traffic pressure poses an urgent demand on network operators to explore effective paradigms towards 5G

  • In order to evaluate the performance of our caching strategy, we compared it with three classic cache replacement algorithms

  • The hit rate increases quickly and eventually stabilizes. This is because our reward function is designed to increase the cache hit rate; our DRL agent is dedicated to maximizing the system hit rate
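The highlights mention a comparison against three classic cache replacement algorithms. As one plausible example of such a baseline, the sketch below implements Least Recently Used (LRU) eviction with hit-rate accounting; the class and the request trace are illustrative assumptions, not the paper's exact baselines.

```python
from collections import OrderedDict

class LRUCache:
    """Classic LRU replacement baseline with hit-rate bookkeeping."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # insertion order tracks recency
        self.hits = 0
        self.requests = 0

    def request(self, content_id):
        """Serve a content request; return True on a cache hit."""
        self.requests += 1
        if content_id in self.store:
            self.store.move_to_end(content_id)  # refresh recency on hit
            self.hits += 1
            return True
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)  # evict least recently used
        self.store[content_id] = True
        return False

    def hit_rate(self):
        return self.hits / self.requests if self.requests else 0.0

# Toy trace: with capacity 2, only the repeated request for content 1 hits.
cache = LRUCache(capacity=2)
trace = [1, 2, 1, 3, 2]
results = [cache.request(c) for c in trace]
# results == [False, False, True, False, False]; hit rate 1/5 = 0.2
```

A learned policy is evaluated the same way, so hit rates of the DRL agent and classic baselines are directly comparable on a shared trace.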


Introduction

With the development of network services and the sharp increase in mobile devices, severe traffic pressure poses an urgent demand on network operators to explore effective paradigms towards 5G. Device-to-Device (D2D) content sharing is an effective method to reduce mobile network traffic. In this way, users can download required content from nearby devices and enjoy data services with low access latency [2], which improves their quality of service (QoS). To design an efficient caching strategy in mobile networks, we need to obtain statistical information on user requests and sharing activities by learning from the extreme volume of mobile traffic. A learning-based method has been proposed to jointly optimize mobile content sharing and caching [4, 5]. The authors of [6] calculated the minimum offloading loss according to users' request intervals and explored content caching at small base stations (SBSs). Traditional RL techniques are not feasible for mobile network environments with large state spaces.
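The infeasibility of tabular RL here can be made concrete by counting states: with a content library of N items and an edge cache holding C of them, a Q-table would need one row per possible cache content, i.e., the binomial coefficient C(N, C). The sketch below illustrates the explosion; the library and cache sizes are assumed for illustration, not taken from the paper.

```python
from math import comb

def num_cache_states(library_size, cache_capacity):
    """Number of distinct cache contents = C(N, C) ways to fill the cache."""
    return comb(library_size, cache_capacity)

# With a cache of 10 items, the state count explodes as the library grows:
for n in (20, 100, 1000):
    print(f"N={n}: {num_cache_states(n, 10):,} cache states")
# C(1000, 10) is about 2.6e23 states -- far beyond any Q-table, which
# motivates deep function approximation (DQN / Double DQN) instead.
```

This is why the paper turns to Double DQN: a neural network generalizes across cache states rather than enumerating them.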

