Abstract

Edge caching is regarded as a promising technique for low-latency, high-rate data delivery in future networks, and there is increasing interest in leveraging Machine Learning (ML) for content placement in place of traditional optimization-based methods, owing to its self-adaptive ability in complex environments. Despite many efforts on ML-based cooperative caching, several key issues remain, particularly reducing computation complexity and communication costs while optimizing cache efficiency. To this end, in this paper we propose an efficient cooperative caching framework (FDDL) to address these issues in mobile edge networks. Specifically, we propose a DRL-CA algorithm for cache admission, which extracts a broader set of attributes from massive requests to improve cache efficiency. We then present a lightweight eviction algorithm for fine-grained replacement of unpopular contents. Moreover, we present a Federated Learning-based parameter sharing mechanism to reduce signaling overhead during collaboration. We implement an emulation system and evaluate the caching performance of the proposed FDDL. Emulation results show that FDDL achieves a higher cache hit ratio and traffic offloading rate than several conventional caching policies and DRL-based caching algorithms, while effectively reducing communication costs and training time.
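The abstract does not detail the Federated Learning-based parameter sharing mechanism; as a rough illustration of the general idea, edge nodes can exchange model parameters rather than raw request data and aggregate them FedAvg-style. The sketch below is an assumption for illustration only (the function name `fedavg` and the dict-of-arrays parameter format are hypothetical, not from the paper):

```python
import numpy as np

def fedavg(client_params, client_weights=None):
    """Weighted average of model parameters from several edge nodes.

    client_params: list of dicts mapping layer name -> np.ndarray,
        one dict per participating edge node.
    client_weights: optional per-node weights (e.g. local sample counts);
        defaults to a uniform average.
    """
    n = len(client_params)
    if client_weights is None:
        client_weights = [1.0] * n
    total = float(sum(client_weights))
    # Average each layer's parameters across nodes, weighted per node.
    return {
        name: sum(w * p[name] for w, p in zip(client_weights, client_params)) / total
        for name in client_params[0]
    }
```

Only the aggregated parameters travel between nodes, which is how such schemes reduce signaling overhead relative to sharing raw training data.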
