Abstract
In this paper, the cooperative edge caching problem in fog radio access networks (F-RANs) is investigated. To minimize the content transmission delay, we formulate a cooperative caching optimization problem whose solution is the globally optimal caching strategy. Given the non-deterministic polynomial hard (NP-hard) nature of this problem, a multi-agent reinforcement learning (MARL)-based cooperative caching scheme is proposed. The proposed scheme deploys a double deep Q-network (DDQN) at every fog access point (F-AP) and introduces a communication process into the multi-agent system. Each F-AP records the historical caching strategies of its associated F-APs as the observations of the communication process. By exchanging these observations, the F-APs can cooperate to reach the globally optimal caching strategy. Simulation results show that, compared with the benchmark schemes, the proposed MARL-based cooperative caching scheme achieves remarkable performance in minimizing the content transmission delay.
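The core learning rule behind the scheme is the double DQN update, in which the online network selects the next action and the target network evaluates it, reducing the overestimation bias of standard Q-learning. The paper itself does not give code; the following is a minimal illustrative sketch of that target computation for a single F-AP agent, with all function and variable names (`ddqn_target`, the toy Q-value arrays) being hypothetical, not taken from the paper:

```python
import numpy as np

def ddqn_target(reward, gamma, q_online_next, q_target_next):
    """Double DQN target: the online network selects the greedy next
    action, and the target network supplies its value estimate."""
    a_star = int(np.argmax(q_online_next))          # action chosen by online net
    return reward + gamma * q_target_next[a_star]   # evaluated by target net

# Toy example: one F-AP choosing which of 3 contents to cache.
# Negative reward models content transmission delay, as in the paper's objective.
q_online_next = np.array([1.0, 3.0, 2.0])  # online-net Q-values at next state
q_target_next = np.array([0.5, 1.5, 4.0])  # target-net Q-values at next state
y = ddqn_target(reward=-0.2, gamma=0.9,
                q_online_next=q_online_next, q_target_next=q_target_next)
# Online net picks action 1 (Q = 3.0); target net evaluates it as 1.5,
# so y = -0.2 + 0.9 * 1.5 = 1.15
```

In the full multi-agent setting described in the abstract, each F-AP would additionally append the exchanged observations (its neighbors' recent caching strategies) to its own state before computing Q-values; the target computation itself is unchanged.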