Abstract
To improve the edge caching efficiency of the fog radio access network (F-RAN), this paper proposes a distributed deep Q-learning-based content caching scheme built on user preference and content popularity prediction. Given the constraint that the storage capacity of each device is limited, the optimization problem is formulated to maximize the caching hit rate. Specifically, taking users' selfishness into consideration, user preference is predicted offline by applying popular topic models. Then, content popularity is predicted online by combining the network topology with the obtained user preference. Finally, with the predicted user preference and content popularity, a deep Q-learning network (DQN)-based content caching algorithm is proposed to obtain the optimal content caching strategy. Moreover, we further present a content update policy driven by the user preference and content popularity predictions, so that the proposed algorithm can handle variations in content popularity in a timely manner. Simulation results demonstrate that the proposed scheme achieves a better caching hit rate than existing algorithms.
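The abstract's online popularity-prediction step (aggregating predicted user preferences over the network topology) can be sketched as follows. The preference matrix, per-user influence weights, and the simple weighted-sum normalisation are illustrative assumptions, not the paper's exact model:

```python
def predict_popularity(preferences, social_weights):
    """Estimate content popularity by aggregating user preferences,
    weighted by each user's influence in the network topology.

    preferences[u]    : list of probabilities that user u requests each content
    social_weights[u] : influence weight of user u (e.g. degree centrality);
                        this weighting scheme is an assumption for illustration.
    """
    n_contents = len(next(iter(preferences.values())))
    popularity = [0.0] * n_contents
    for user, prefs in preferences.items():
        w = social_weights.get(user, 1.0)
        for f, p in enumerate(prefs):
            popularity[f] += w * p
    total = sum(popularity)
    # Normalise so the result is a probability distribution over contents.
    return [p / total for p in popularity]
```

A caching node could rank contents by this distribution and fill its limited storage with the top entries, which is the hit-rate objective the abstract formulates.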
Highlights
In recent years, with the rapid development of the Internet of Things (IoT) and the proliferation of smart terminals, mobile users increasingly demand high-quality network services with a high quality of experience (QoE), for which they are willing to pay more
In [15], user mobility and content popularity are predicted by echo state networks (ESNs), and a deep Q-learning network (DQN)-based algorithm is used to optimize the content distribution problem
Since the problem is NP-hard, instead of exhaustive search, we propose a DQN-based algorithm to solve it
Summary
With the rapid development of the Internet of Things (IoT) and the proliferation of smart terminals, mobile users increasingly demand high-quality network services with a high quality of experience (QoE), for which they are willing to pay more. In [15], user mobility and content popularity are predicted by echo state networks (ESNs), and a deep Q-learning network (DQN)-based algorithm is used to optimize the content distribution problem. This paper proposes a distributed deep Q-learning-based content caching strategy that considers user preference and content popularity prediction. (3) Content update strategy: by setting a specific update time, we consider a real-time content update optimization strategy that combines user preference, content popularity, and deep Q-learning to improve the caching hit rate. In the proposed optimized caching policy, we first model user preference with the topic model and predict content popularity from user preference. We then use user preference and content popularity with the DQN to obtain the optimal caching status matrix and derive the optimal caching strategy
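The Q-learning-driven caching decision described in the summary can be illustrated with a minimal tabular Q-learning sketch standing in for the paper's DQN (a neural network would replace the Q-table in the actual scheme). The state/action encoding, reward definition, and simulated request model are all assumptions made for this sketch:

```python
import random
from collections import defaultdict

def q_learning_cache(popularity, capacity, episodes=3000,
                     alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning sketch of the cache-update decision.

    state  : frozenset of currently cached content indices
    action : index of a content to place in the cache
    reward : 1.0 if the next simulated request hits the cache, else 0.0
    Requests are drawn from the (assumed known) popularity distribution.
    """
    rng = random.Random(seed)
    n = len(popularity)
    q = defaultdict(float)                    # Q[(state, action)] -> value
    state = frozenset(range(capacity))        # arbitrary initial cache fill

    for _ in range(episodes):
        # Epsilon-greedy choice of which content should be in the cache.
        if rng.random() < eps:
            action = rng.randrange(n)
        else:
            action = max(range(n), key=lambda a: q[(state, a)])
        cache = set(state)
        if action not in cache:
            # Evict the cached item with the lowest learned value.
            victim = min(cache, key=lambda c: q[(state, c)])
            cache.remove(victim)
            cache.add(action)
        next_state = frozenset(cache)

        # Simulate one user request and reward a cache hit.
        request = rng.choices(range(n), weights=popularity)[0]
        reward = 1.0 if request in next_state else 0.0

        # Standard Q-learning update.
        best_next = max(q[(next_state, a)] for a in range(n))
        q[(state, action)] += alpha * (reward + gamma * best_next
                                       - q[(state, action)])
        state = next_state
    return state
```

Re-running this training loop at each scheduled update time, with freshly predicted popularity, mirrors the paper's content update strategy of refreshing the cache as popularity shifts.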