Abstract
In this paper, we consider a three-layer distributed multi-access edge computing (MEC) network in which multiple clouds, MEC servers, and edge devices (EDs) are deployed at the top, middle, and bottom layers, respectively. Each cloud center (CC) is associated with an independent service provider and publishes an application-driven computing task. To deliver these tasks, the CCs rely on EDs to generate the raw data and offload part of the computation to both EDs and MEC servers, so that the computing and transmission resources of all three layers are fully utilized to reduce system latency. However, in such a three-layer network, the distributed deployment of tasks leads to inevitable resource competition among CCs. To address this issue, we propose a distributed scheme based on multi-agent reinforcement learning, in which each CC jointly determines its task offloading and resource allocation strategy based on its inference of the other CCs' decisions. Simulation results indicate that the proposed scheme achieves lower system latency than existing schemes. In addition, the influence of the number of CCs, MEC servers, and EDs on latency performance is also discussed.