Abstract
Integrating mobile edge computing (MEC) with small cell networks has been conceived as a promising solution to provide pervasive computing services. However, the interactions among small cells due to inter-cell interference, the diverse application-specific requirements, and the highly dynamic wireless environment make it challenging to design an optimal computation offloading scheme. In this paper, we focus on the joint design of computation offloading and interference coordination for edge intelligence empowered small cell networks. To this end, we propose a distributed multi-agent deep reinforcement learning (DRL) scheme with the objective of minimizing the overall energy consumption while ensuring the latency requirements. Specifically, we exploit the collaboration among small cell base station (SBS) agents to adaptively adjust their strategies, considering computation offloading, channel allocation, power control, and computation resource allocation. Furthermore, to reduce the computational complexity and signaling overhead of the training process, we design a federated DRL scheme which only requires SBS agents to share their model parameters instead of local training data. Numerical results demonstrate that our proposed schemes can significantly reduce energy consumption and effectively guarantee the latency requirements compared with the benchmark schemes.
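The federated DRL scheme described above has SBS agents share only model parameters, which are then aggregated into a global model. A minimal sketch of such a parameter-averaging step (FedAvg-style aggregation) is shown below; the function and data-structure names are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of federated parameter aggregation: each SBS agent
# trains a local DRL model and uploads only its parameters; a coordinator
# averages them element-wise to form the shared global model. No local
# training data is exchanged.

def federated_average(agent_params):
    """Average a list of parameter dicts (one per SBS agent) element-wise."""
    n = len(agent_params)
    avg = {}
    for key in agent_params[0]:
        avg[key] = [sum(p[key][i] for p in agent_params) / n
                    for i in range(len(agent_params[0][key]))]
    return avg

# Example: three SBS agents, each with a tiny local "model" (a weight vector).
local_models = [
    {"w": [1.0, 2.0]},
    {"w": [3.0, 4.0]},
    {"w": [5.0, 6.0]},
]
global_model = federated_average(local_models)
# global_model["w"] == [3.0, 4.0]
```

In a full system, each round would interleave local DRL updates at every SBS with this aggregation step, so agents benefit from each other's experience while keeping training data local.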