Abstract
This paper proposes a novel decentralized federated reinforcement learning (DFRL) framework that integrates deep reinforcement learning (DRL) with decentralized federated learning (DFL). The DFRL framework enables efficient virtual instance scaling in Mobile Edge Computing (MEC) environments for 5G core network automation, allowing multiple MEC nodes to collaboratively optimize resource allocation without centralized data sharing. In this framework, the DRL agent in each MEC node makes local scaling decisions and exchanges model parameters with other MEC nodes, rather than sharing raw data. To enhance robustness against malicious server attacks, we employ a committee mechanism that monitors the DFL process and ensures reliable aggregation of local gradients. Extensive simulations were conducted to evaluate the proposed framework, demonstrating its ability to maintain cost-effective resource usage while significantly reducing blocking rates across diverse traffic conditions. Furthermore, the framework demonstrated strong resilience against adversarial MEC nodes, ensuring reliable operation and efficient resource management. These results validate the framework's effectiveness in adaptive and efficient resource management, particularly in dynamic and varied network scenarios.
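To illustrate the kind of robust aggregation the abstract describes, the sketch below shows one plausible (assumed, not the paper's actual method) committee-style filter: node updates far from the coordinate-wise median are discarded before federated averaging, so a single malicious MEC node cannot skew the shared model. All function names and the outlier rule are illustrative assumptions.

```python
import statistics

def committee_filter(updates, threshold=2.0):
    """Keep updates whose L2 distance to the coordinate-wise median
    update is within `threshold` times the median such distance.
    A simple stand-in for a committee screening step (assumption)."""
    n_params = len(updates[0])
    median = [statistics.median(u[i] for u in updates) for i in range(n_params)]
    def dist(u):
        return sum((a - b) ** 2 for a, b in zip(u, median)) ** 0.5
    dists = [dist(u) for u in updates]
    med_d = statistics.median(dists) or 1e-12  # avoid division by zero
    return [u for u, d in zip(updates, dists) if d <= threshold * med_d]

def aggregate(updates):
    """Federated averaging over the updates the committee accepted."""
    accepted = committee_filter(updates)
    n = len(accepted)
    return [sum(u[i] for u in accepted) / n for i in range(len(accepted[0]))]

# Three honest MEC nodes cluster near [1.0, 2.0]; one adversarial
# node submits an extreme update that the filter should reject.
honest = [[1.0, 2.0], [1.1, 1.9], [0.9, 2.1]]
malicious = [[100.0, -100.0]]
agg = aggregate(honest + malicious)
```

Here the aggregate stays at the honest consensus `[1.0, 2.0]` because the adversarial update is screened out before averaging; plain averaging over all four updates would have been pulled far off by the malicious node.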