Abstract

In this paper, the cooperative edge caching problem is investigated in fog radio access networks (F-RANs). Given the non-deterministic polynomial hard (NP-hard) nature of this problem, a federated deep reinforcement learning (FDRL) framework is put forth to learn the content caching strategy. Then, to overcome the curse of dimensionality in reinforcement learning and improve overall caching performance, we propose a dueling deep Q-network based cooperative edge caching method that finds the optimal caching policy in a distributed manner. Furthermore, horizontal federated learning (HFL) is applied to curb the excessive resource consumption incurred by distributed training and data transmission. Simulation results show that, compared with three classical content caching methods and two reinforcement learning algorithms, the proposed method reduces content request delay and improves the cache hit rate.
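The two core ingredients named in the abstract can be illustrated compactly. Below is a minimal sketch, not the paper's implementation: `dueling_q` shows the dueling decomposition Q(s,a) = V(s) + A(s,a) - mean_a A(s,a) used by dueling deep Q-networks, and `fedavg` shows the sample-weighted averaging of local model parameters that horizontal federated learning performs at the aggregation step. All function and variable names are illustrative assumptions.

```python
import numpy as np

def dueling_q(value, advantages):
    # Dueling decomposition: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a).
    # Subtracting the mean advantage makes V and A identifiable, so the
    # mean of Q over actions equals the state value V(s).
    return value + advantages - advantages.mean()

def fedavg(weight_sets, sample_counts):
    # Horizontal federated averaging (FedAvg-style): each edge node's
    # local weights are averaged, weighted by its local sample count,
    # so raw training data never leaves the node.
    total = sum(sample_counts)
    return sum(w * (n / total) for w, n in zip(weight_sets, sample_counts))

# Illustrative usage with toy numbers (3 candidate caching actions, 2 nodes):
v = 2.0
a = np.array([1.0, 3.0, 5.0])
q = dueling_q(v, a)            # Q-values for the 3 actions

w_global = fedavg([np.array([1.0, 1.0]), np.array([3.0, 3.0])], [1, 1])
```

The mean-subtraction in `dueling_q` is the standard identifiability fix for dueling networks; without it, a constant could be shifted freely between the value and advantage streams.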
