Edge caching in the Internet of Vehicles (IoV) can reduce backhaul strain and content access delay. However, because vehicle requests change constantly, offloading applications to edge servers is crucial for efficiently anticipating and caching popular content. Moreover, conventional data-sharing techniques are inadequate for this task because they cannot preserve the privacy of vehicular users (VUs). To overcome these issues, we propose PCAD, a cooperative proactive content caching scheme that combines asynchronous federated learning with deep reinforcement learning and leverages the strengths of Dueling Deep Q-Networks and Prioritized Experience Replay in vehicular edge computing. PCAD lowers content access latency by prefetching popular content and caching it on edge nodes in advance, and by removing the need to wait for every vehicle to finish training and upload its local model before the global model is updated. We also investigate intelligent caching decisions based on content prediction. Comprehensive experimental evaluations show that the proposed approach significantly outperforms existing benchmark caching techniques: at a cache capacity of 400 MB, the cache hit rate improves by approximately 4.25%, 11.23%, and 25.82% over DDQN, c-ϵ-greedy, and PCAD without DRL, respectively.
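To make the DRL components named in the abstract concrete, the sketch below shows a minimal Dueling Deep Q-Network head and a simplified proportional Prioritized Experience Replay buffer in PyTorch. It is an illustrative assumption of how such an agent could be structured, not the paper's PCAD implementation: the state/action dimensions, network sizes, and buffer parameters are hypothetical placeholders.

```python
import torch
import torch.nn as nn

class DuelingDQN(nn.Module):
    """Dueling architecture: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    def __init__(self, state_dim, num_actions, hidden=128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)                 # state-value stream V(s)
        self.advantage = nn.Linear(hidden, num_actions)   # advantage stream A(s, a)

    def forward(self, state):
        h = self.feature(state)
        v = self.value(h)
        a = self.advantage(h)
        return v + a - a.mean(dim=1, keepdim=True)

class PrioritizedReplay:
    """Simplified proportional prioritized replay (list-based, no sum-tree)."""
    def __init__(self, capacity=10000, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.buffer, self.priorities = [], []

    def push(self, transition):
        # New transitions get the current maximum priority so they are sampled at least once.
        max_p = max(self.priorities, default=1.0)
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append(max_p)

    def sample(self, batch_size, beta=0.4):
        probs = torch.tensor(self.priorities) ** self.alpha
        probs /= probs.sum()
        idx = torch.multinomial(probs, batch_size, replacement=True)
        # Importance-sampling weights correct the bias introduced by prioritized sampling.
        weights = (len(self.buffer) * probs[idx]) ** (-beta)
        weights /= weights.max()
        return [self.buffer[i] for i in idx], idx, weights

    def update_priorities(self, idx, td_errors):
        # Priorities are refreshed with the latest absolute TD errors.
        for i, e in zip(idx.tolist(), td_errors.tolist()):
            self.priorities[i] = abs(e) + 1e-6
```

In a caching setting of this kind, the state would typically encode recent request statistics and current cache occupancy, while each action selects which content to cache or evict; the prioritized buffer replays transitions with large TD errors more often to speed up learning of the caching policy.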