Abstract

With the explosive growth of content request services in vehicular networks, there is an urgent need to speed up the response to content requests and reduce the backhaul burden on base stations. However, most traditional content caching strategies consider either content popularity or cluster-based caching in isolation, and their access paths are fixed. This paper proposes a collaborative caching strategy with reinforcement learning (RL)-based content downloading. Specifically, vehicles are first clustered by the K-means algorithm, and the content transmission distance is reduced by caching highly popular content at the cluster heads. Then, based on historical content request information, a long short-term memory (LSTM) network is used to predict content popularity, and the most popular content is collaboratively cached at the base station and the cluster heads. Finally, the content downloading problem is formulated as a Markov decision process and solved with a deep reinforcement learning algorithm, Deep Q-Network (DQN), whose objective is to minimize a weighted cost comprising the downloading delay and the failure cost. With the DQN algorithm, each cluster head can make the access decision for a content request. The proposed collaborative caching strategy with RL-based content downloading greatly shortens the response process and lightens the load on the base station. Simulation results show that the proposed RL-based method improves the access hit ratio and reduces the content downloading delay.
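To make the clustering step concrete, the sketch below clusters vehicle positions with K-means and elects one cluster head per cluster. It is a minimal illustration under assumed conditions: the 2-D positions, the cluster count k, and the nearest-to-centroid head election rule are all illustrative assumptions, not the paper's settings.

```python
# A minimal sketch of the clustering step, assuming vehicles are points
# in a 2-D road plane and the cluster head is the vehicle closest to
# each K-means centroid. Parameter choices are illustrative, not the
# paper's settings.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
positions = rng.uniform(0, 1000, size=(60, 2))   # 60 vehicles, metres (assumed)

k = 5                                            # assumed cluster count
km = KMeans(n_clusters=k, n_init=10, random_state=1).fit(positions)

# Elect as cluster head the vehicle nearest to its cluster centroid,
# so popular content cached there keeps intra-cluster transmission
# distances short.
heads = []
for c in range(k):
    members = np.where(km.labels_ == c)[0]
    dists = np.linalg.norm(positions[members] - km.cluster_centers_[c], axis=1)
    heads.append(int(members[np.argmin(dists)]))

print("cluster heads (vehicle indices):", heads)
```

Caching at the head nearest the centroid is one simple way to realize the abstract's goal of reducing content transmission distance within a cluster; the paper may use a different head-election criterion.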
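The access decision can likewise be sketched as a small MDP. The paper uses a DQN; the snippet below substitutes tabular Q-learning as a lightweight stand-in, on an assumed toy environment where the state is the requested content, the actions are three fetch sources (own cluster head, a neighboring cluster head, the base station), and the reward is the negative weighted cost of downloading delay plus a miss penalty. Every name, cost value, and cache placement here is an illustrative assumption.

```python
# A toy sketch of the content-access MDP described in the abstract,
# using tabular Q-learning as a lightweight stand-in for the DQN.
# The environment, costs, and cache placements are illustrative
# assumptions, not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)

N_CONTENTS = 8                       # catalogue size (assumed)
ACTIONS = ["cluster_head", "neighbor_head", "base_station"]
DELAY = np.array([1.0, 2.0, 5.0])    # assumed per-source delay cost
FAIL_COST = 10.0                     # assumed penalty for a cache miss

# Assumed cache placements: which source holds which content.
cache = {
    0: set(rng.choice(N_CONTENTS, 3, replace=False)),  # own cluster head
    1: set(rng.choice(N_CONTENTS, 3, replace=False)),  # neighbor head
    2: set(range(N_CONTENTS)),                         # base station has all
}

def step(content, action):
    """Weighted cost = downloading delay + failure cost on a miss."""
    hit = content in cache[action]
    cost = DELAY[action] + (0.0 if hit else FAIL_COST)
    return -cost, hit                # reward is the negative weighted cost

Q = np.zeros((N_CONTENTS, len(ACTIONS)))
alpha, eps = 0.1, 0.1
for episode in range(5000):
    content = rng.integers(N_CONTENTS)       # state: requested content
    if rng.random() < eps:                   # epsilon-greedy exploration
        a = rng.integers(len(ACTIONS))
    else:
        a = int(np.argmax(Q[content]))
    reward, _ = step(content, a)
    # One-step episodes reduce the update to a contextual-bandit form.
    Q[content, a] += alpha * (reward - Q[content, a])

for c in range(N_CONTENTS):
    print(f"content {c}: fetch from {ACTIONS[int(np.argmax(Q[c]))]}")
```

In the paper the state would be richer (e.g., request history and cache occupancy) and the Q-function would be approximated by a deep network; the table above only illustrates how minimizing the weighted cost drives the cluster head's access decision.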
