Abstract
Edge computing is a new paradigm that provides computing capability at network edges close to end devices. A significant research challenge in edge computing is finding an efficient assignment of tasks to edge and cloud servers, considering diverse task characteristics and limited network and server resources. Several studies have proposed reinforcement learning (RL) based task offloading methods, because a pre-trained RL agent can output an efficient offloading decision immediately. However, because of the performance limitations of RL, these previous studies either do not consider clouds or focus on a single cloud. They also do not consider the bandwidth and topology of the backbone network. Such shortcomings can degrade the performance of task offloading. Therefore, we formulated a task offloading problem for multi-cloud and multi-edge networks that considers network topology and bandwidth constraints. Moreover, we proposed a task offloading method based on cooperative multi-agent deep reinforcement learning (Coop-MADRL) to overcome the performance limitations of RL. This method introduces a cooperative multi-agent technique through centralized training and decentralized execution, which improves the efficiency of task offloading. Simulations revealed that the proposed method drastically reduces average latency while satisfying all constraints, compared with a greedy approach. They also revealed that the proposed cooperative learning method improves the efficiency of task offloading.
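For illustration only, below is a minimal sketch of the centralized-training, decentralized-execution pattern the abstract refers to. It uses a hypothetical toy offloading environment with per-agent tabular Q-values and a shared team reward rather than the paper's deep networks; all names here (toy_latency, ACTIONS, the contention model, the hyperparameters) are assumptions made for this sketch, not the authors' implementation.

```python
# Sketch: cooperative multi-agent learning with centralized training and
# decentralized execution for task offloading (hypothetical toy setup).
import random
from collections import defaultdict

N_AGENTS = 3                          # edge nodes, each deciding where to offload its task
ACTIONS = ["local", "edge", "cloud"]  # hypothetical offloading targets
EPS, ALPHA = 0.1, 0.1                 # exploration rate, learning rate

# One independent value table per agent; execution needs only the agent's own table.
q_tables = [defaultdict(float) for _ in range(N_AGENTS)]

def toy_latency(actions):
    """Hypothetical team cost: local execution is slow, the shared edge server
    congests as more agents offload to it, and the cloud has a fixed delay."""
    edge_load = actions.count("edge")
    cost = 0.0
    for a in actions:
        if a == "local":
            cost += 3.0
        elif a == "edge":
            cost += 1.0 * edge_load   # contention on the shared edge server
        else:                         # cloud
            cost += 2.5
    return cost

def act(agent, obs):
    """Decentralized execution: each agent acts on its local observation only."""
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_tables[agent][(obs, a)])

for episode in range(5000):
    obs = tuple(random.randint(0, 2) for _ in range(N_AGENTS))  # toy task sizes
    actions = [act(i, obs[i]) for i in range(N_AGENTS)]
    # Centralized training: every agent is updated with the shared team reward,
    # so each agent learns actions that help the whole system, not just itself.
    team_reward = -toy_latency(actions)
    for i, a in enumerate(actions):
        key = (obs[i], a)
        q_tables[i][key] += ALPHA * (team_reward - q_tables[i][key])

# Inspect what agent 0 has learned for one local observation.
print({a: round(q_tables[0][(1, a)], 2) for a in ACTIONS})
```

The design point this sketch illustrates is that the shared (team) reward is only needed during training; at execution time each agent selects its offloading target from its own table and local observation, which is what makes the decision fast enough to run online.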