Abstract

We investigate a distributed caching strategy based on multi-agent reinforcement learning (MARL) in a cache-aided network, where each wireless node has limited storage capacity and serves users within its coverage area. The wireless nodes collaboratively optimize the distributed caching strategy to maximize network performance, measured by the average cache hit probability. Specifically, we first model the distributed caching problem as a fully cooperative repeated game and then analyze how the average cache hit probability can be improved under the MARL framework. We further propose a caching strategy based on the frequency maximum Q-value (FMQ) heuristic and a caching strategy based on distributed Q-learning (DQ) to optimize the distributed caching strategy. Simulation results show that the proposed FMQ-based strategy significantly improves the average cache hit probability, while the proposed DQ-based strategy converges to the optimal strategy with probability one. Moreover, both proposed strategies outperform not only the Q-learning-based strategy but also the probabilistic caching placement (PCP) and most popular content (MPC) strategies.
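
To make the two learning rules concrete, the sketch below illustrates, under stated assumptions, how each node might update its estimates in a stateless fully cooperative repeated game: an action is the content index a node chooses to cache, and the reward is the observed cache-hit feedback. This is a minimal illustration, not the paper's implementation; the class names, learning rate, FMQ weight, and epsilon-greedy selection are assumptions introduced here for clarity.

```python
import random

# Illustrative sketch only: one stateless learner per wireless node.
# Action = index of the content the node caches; reward = cache-hit feedback in [0, 1].

class FMQAgent:
    """FMQ-style heuristic: bias action selection toward actions whose
    maximum observed reward occurs frequently."""
    def __init__(self, n_actions, alpha=0.1, weight=10.0):
        self.q = [0.0] * n_actions
        self.max_r = [0.0] * n_actions      # best reward seen per action
        self.max_count = [0] * n_actions    # how often that best reward occurred
        self.count = [0] * n_actions
        self.alpha, self.weight = alpha, weight

    def select(self, epsilon=0.1):
        if random.random() < epsilon:
            return random.randrange(len(self.q))
        # Evaluation value EV(a) = Q(a) + weight * freq(max reward of a) * max reward of a
        ev = [
            q + self.weight * (self.max_count[a] / max(1, self.count[a])) * self.max_r[a]
            for a, q in enumerate(self.q)
        ]
        return max(range(len(ev)), key=ev.__getitem__)

    def update(self, action, reward):
        self.count[action] += 1
        if reward > self.max_r[action]:
            self.max_r[action], self.max_count[action] = reward, 1
        elif reward == self.max_r[action]:
            self.max_count[action] += 1
        self.q[action] += self.alpha * (reward - self.q[action])


class DistributedQAgent:
    """Distributed-Q-learning-style rule: optimistic update that only ever
    increases the Q-value, ignoring low rewards caused by other nodes' exploration."""
    def __init__(self, n_actions):
        self.q = [0.0] * n_actions

    def select(self, epsilon=0.1):
        if random.random() < epsilon:
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=self.q.__getitem__)

    def update(self, action, reward):
        self.q[action] = max(self.q[action], reward)  # stateless optimistic update


if __name__ == "__main__":
    # Toy usage with a hypothetical reward: two nodes, five contents; caching
    # distinct contents yields a higher (assumed) cache-hit reward.
    agents = [DistributedQAgent(5), DistributedQAgent(5)]
    for _ in range(200):
        actions = [a.select(epsilon=0.2) for a in agents]
        reward = 1.0 if len(set(actions)) == len(actions) else 0.5
        for agent, act in zip(agents, actions):
            agent.update(act, reward)
    print([agent.q for agent in agents])
```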
