Abstract

With the rapid development of the Internet of Vehicles (IoV) and the continuous emergence of vehicular information applications, the demand for content in vehicular networks is growing rapidly. Mobile vehicular edge caching is regarded as a promising technology for improving Quality of Service (QoS) and reducing latency. Many caching algorithms have been proposed that place contents in Road Side Units (RSUs) to serve nearby users. However, due to the high-speed movement of vehicles and the limited coverage of RSUs, caching interruptions occur frequently, degrading service quality. To address this problem, we make full use of Vehicle-to-Vehicle (V2V) collaboration to construct a caching system that does not require RSU support, and propose a Recursive Deep Reinforcement Learning based Collaborative Caching Relay strategy (RDRL-CR). To minimize service delay under capacity constraints, the caching problem is formulated as an integer linear programming problem, and caching decisions are made within a partially observable Markov decision process (POMDP). Specifically, the strategy uses a Graph Neural Network (GNN) to predict vehicle trajectories, and then selects vehicles that can serve as caching nodes by computing link stability metrics between vehicles. A Long Short-Term Memory (LSTM) network is embedded into a deep deterministic policy gradient (DDPG) algorithm to make the final caching decisions. Compared with existing caching strategies, the proposed strategy improves the cache hit rate by about 25% and reduces content access latency by about 20%.
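The abstract does not spell out how the link stability metric between vehicles is computed. One plausible reading, sketched below in Python, is the classic link expiration time used in vehicular networking: given two vehicles' positions and (GNN-predicted) velocities, estimate how long they remain within communication range of each other. The function name `link_stability`, its parameters, and the `comm_range` value are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def link_stability(pos_i, pos_j, vel_i, vel_j, comm_range):
    """Link expiration time: how long vehicles i and j stay within
    comm_range of each other, assuming constant velocities.

    Solves ||(pos_i - pos_j) + t*(vel_i - vel_j)|| = comm_range for t >= 0.
    """
    dp = np.asarray(pos_i, float) - np.asarray(pos_j, float)
    dv = np.asarray(vel_i, float) - np.asarray(vel_j, float)

    a = dv @ dv                      # squared relative speed
    b = 2.0 * (dp @ dv)
    c = dp @ dp - comm_range ** 2    # negative while the link is up

    if c > 0:
        return 0.0                   # already out of range: no link
    if a == 0.0:
        return np.inf                # same velocity: link never breaks

    # With c <= 0 and a > 0 the discriminant is non-negative and the
    # larger root is the future instant at which the link breaks.
    return (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

# Example: vehicle i trails j by 50 m and closes at 10 m/s; 300 m V2V range.
t = link_stability([0.0, 0.0], [50.0, 0.0], [20.0, 0.0], [10.0, 0.0], 300.0)
print(f"estimated link lifetime: {t:.1f} s")   # ~35.0 s
```

Under this reading, vehicles whose predicted link lifetimes toward requesters exceed some threshold would be the ones eligible to act as V2V caching nodes.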
