Abstract

Content caching in vehicular networks is a promising technology for dramatically reducing request-response time and transmission delay. Existing caching policies often suffer from high computation and communication overhead and fail to capture the dynamics of vehicular networks and content popularity. In this paper, we propose a novel Cooperative Caching algorithm for vehicular networks with multi-level federated Reinforcement Learning (named CoCaRL) that dynamically determines which contents should be replaced and where content requests should be served. In CoCaRL, Deep Reinforcement Learning (DRL) is employed to optimize the cooperative caching policy between RoadSide Units (RSUs) of vehicular networks, while a federated learning framework is applied to reduce the computation and communication overhead in a decentralized way. To speed up convergence, we also develop a two-level aggregation mechanism for federated learning, in which low-level aggregation is performed at the RSUs and high-level aggregation is executed at a Global Aggregator (GA). Through extensive simulation experiments, we demonstrate that our algorithm: 1) achieves a higher hit rate than four baseline algorithms, 2) converges faster than the original federated reinforcement learning without multi-level aggregation, and 3) adapts well to different cache capacities and content quantities.
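The two-level aggregation described above can be sketched as two rounds of simple federated averaging: each RSU first averages the model weights of its local learners, and the Global Aggregator then averages the RSU-level models. The sketch below is a minimal illustration under assumed data; the group structure, weight vectors, and the plain unweighted FedAvg rule are illustrative assumptions, not the paper's exact aggregation scheme.

```python
import numpy as np

def fedavg(weight_sets):
    """Unweighted federated averaging over a list of weight vectors."""
    return np.mean(weight_sets, axis=0)

# Hypothetical setup: 2 RSUs, each collecting weights from 3 local DRL learners.
rsu_groups = [
    [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])],
    [np.array([2.0, 2.0]), np.array([4.0, 4.0]), np.array([6.0, 6.0])],
]

# Low-level aggregation: performed at each RSU over its local learners.
rsu_models = [fedavg(group) for group in rsu_groups]

# High-level aggregation: performed at the Global Aggregator (GA).
global_model = fedavg(rsu_models)
print(global_model)
```

Because only the RSU-level averages travel to the GA, each round sends one model per RSU instead of one per learner, which is the communication saving the hierarchy targets.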
