Abstract

To improve the edge caching efficiency of the fog radio access network (F-RAN), this paper puts forward a distributed deep Q-learning-based content caching scheme built on user preference prediction and content popularity prediction. Given the constraint that the storage capacity of each device is limited, the optimization problem is formulated to maximize the caching hit rate. Specifically, by taking users’ selfishness into consideration, user preference is predicted in an offline manner by applying popular topic models. Content popularity is then predicted online by combining the network topology with the obtained user preference. Finally, with the predicted user preference and content popularity, a deep Q-learning network (DQN)-based content caching algorithm is proposed to obtain the optimal content caching strategy. Moreover, we further present a content update policy driven by the user preference and content popularity predictions, so that the proposed algorithm can handle variations in content popularity in a timely manner. Simulation results demonstrate that the proposed scheme achieves a better caching hit rate than existing algorithms.
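As a toy illustration of the objective described above: when the request probabilities of a small content library are known and a single device can store only a fixed number of contents, the caching hit rate of a cache set is the total request-probability mass it covers, and the single-cache maximizer is simply the most popular contents up to capacity. The library, probabilities, and capacity below are assumed values for illustration only; the paper's multi-device setting with overlapping coverage is what makes the full problem hard.

```python
CAPACITY = 3  # assumed per-device storage limit (number of contents)

# Assumed request probabilities for a toy content library (sum to 1).
popularity = {"c1": 0.35, "c2": 0.25, "c3": 0.15, "c4": 0.12, "c5": 0.08, "c6": 0.05}

def hit_rate(cache, popularity):
    """Expected fraction of requests served locally by this cache set."""
    return sum(popularity[c] for c in cache)

# With popularity known exactly and a single cache, the optimal strategy
# is to store the top-CAPACITY most popular contents.
optimal = sorted(popularity, key=popularity.get, reverse=True)[:CAPACITY]
print(optimal, round(hit_rate(optimal, popularity), 2))  # ['c1', 'c2', 'c3'] 0.75
```

In the paper's setting the popularity is not known in advance, which is why it must be predicted from user preference before any such cache placement can be optimized.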

Highlights

  • With the rapid development of the Internet of Things (IoT) and the proliferation of smart terminals, mobile users increasingly demand high-quality network services with a high quality of experience (QoE), for which they are willing to pay more

  • In [15], user mobility and content popularity are predicted by echo state networks (ESNs), and a deep Q-learning network (DQN)-based algorithm is used to optimize the content distribution problem

  • Since the problem is NP-hard, instead of exhaustive search methods, we propose a DQN-based algorithm to solve this issue


Summary

INTRODUCTION

With the rapid development of the Internet of Things (IoT) and the proliferation of smart terminals, mobile users increasingly demand high-quality network services with a high quality of experience (QoE), for which they are willing to pay more. In [15], user mobility and content popularity are predicted by echo state networks (ESNs), and a deep Q-learning network (DQN)-based algorithm is used to optimize the content distribution problem. This paper proposes a distributed deep Q-learning-based content caching strategy that considers user preference and content popularity prediction. (3) Content update strategy: by setting a specific update time, we consider a real-time content update optimization strategy that combines user preference, content popularity, and deep Q-learning so as to improve the caching hit rate.

THE PROPOSED OPTIMIZED CACHING POLICY

We first model user preference with the topic model and predict content popularity from user preference. We then use the user preference and content popularity with a DQN to obtain the optimal caching status matrix and derive the optimal caching strategy
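To make the Q-learning-based caching loop concrete, the sketch below uses a simplified tabular Q-learning stand-in for the paper's DQN: the state is the current cache set, the action on a miss is which cached item to evict (or keep the cache unchanged), and the reward is a popularity-gain proxy for the hit-rate improvement. The library size, capacity, popularity values, and reward shaping are all assumptions for illustration, not the paper's exact formulation.

```python
import random

random.seed(0)

N_CONTENTS = 6        # toy library size
CACHE_SIZE = 2        # per-device storage constraint
# Assumed request distribution standing in for the predicted content popularity.
POPULARITY = [0.4, 0.25, 0.15, 0.1, 0.06, 0.04]

def sample_request():
    """Draw one content request from the assumed popularity distribution."""
    r, acc = random.random(), 0.0
    for c, p in enumerate(POPULARITY):
        acc += p
        if r < acc:
            return c
    return N_CONTENTS - 1

# Tabular Q-values: (cache state as frozenset, eviction action) -> value.
Q = {}

def get_q(state, action):
    return Q.get((state, action), 0.0)

def best_action(state, actions):
    return max(actions, key=lambda a: get_q(state, a))

alpha, gamma, eps = 0.1, 0.9, 0.1   # learning rate, discount, exploration
cache = frozenset(range(CACHE_SIZE))
hits = 0
for t in range(20000):
    req = sample_request()
    if req in cache:
        hits += 1
        continue                     # hit: no cache update needed
    state = cache
    actions = list(state) + [-1]     # evict one cached item, or -1 = do nothing
    a = random.choice(actions) if random.random() < eps else best_action(state, actions)
    if a == -1:
        next_cache, reward = cache, 0.0
    else:
        next_cache = frozenset(cache - {a} | {req})
        # Proxy reward: popularity mass gained by swapping the evicted item.
        reward = POPULARITY[req] - POPULARITY[a]
    next_actions = list(next_cache) + [-1]
    td_target = reward + gamma * max(get_q(next_cache, na) for na in next_actions)
    Q[(state, a)] = get_q(state, a) + alpha * (td_target - get_q(state, a))
    cache = next_cache

print(sorted(cache), hits)  # the learned cache tends toward the most popular items
```

The paper's DQN replaces the lookup table with a neural network so the method scales to realistic state spaces, and its content update policy re-runs this decision process at scheduled update times as the predicted popularity drifts.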

USER PREFERENCE PREDICTION
CONTENT POPULARITY PREDICTION
THE PROPOSED DQN BASED CACHING ALGORITHM
THE PROPOSED CONTENT UPDATE POLICY
DELAY ANALYSIS OF CONTENT UPDATE ALGORITHM
SIMULATIONS
CONCLUSION
