Abstract

Edge computing is a paradigm that provides computing capability at edge servers close to end devices. A significant research challenge in edge computing is finding efficient task-offloading decisions to edge and cloud servers, considering various task characteristics and limited network and server resources. Several reinforcement learning (RL)-based task-offloading methods have been developed, because a pre-trained RL policy can immediately output efficient offloading decisions. However, these methods either ignore cloud servers or consider only a single cloud, and they do not take into account the bandwidth and topology of the backbone network. These shortcomings severely limit the range of applicable networks and degrade task-offloading performance. We therefore formulate a task-offloading problem for multi-cloud, multi-edge networks that accounts for network topology and bandwidth constraints. We also propose a task-offloading method based on cooperative multi-agent deep RL (Coop-MADRL). This method introduces a cooperative multi-agent technique through centralized training and decentralized execution, improving task-offloading efficiency. Simulations revealed that the proposed method reduces network utilization and task latency while minimizing constraint violations, producing offloading decisions in less than one millisecond across various network topologies. They also show that cooperative learning improves the efficiency of task offloading. We further demonstrated that, by pre-training on many resource-intensive tasks, the proposed method generalizes to various task types.
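To make the centralized-training, decentralized-execution (CTDE) idea concrete, the following is a minimal sketch, not the paper's actual architecture. It assumes one agent per edge server, a discrete action set of offloading targets (local edge, other edges, or clouds), a VDN-style sum of per-agent Q-values as the joint value during training, and a shared team reward that penalizes latency, backbone utilization, and constraint violations; all dimensions, names, and the reward are illustrative assumptions.

# Minimal CTDE sketch with value decomposition (illustrative, not the paper's method).
import torch
import torch.nn as nn

N_AGENTS = 4     # number of edge-server agents (assumption)
OBS_DIM = 16     # local observation: task size, queue length, link loads, ... (assumption)
N_ACTIONS = 6    # e.g., run locally, offload to 3 other edges, or 2 clouds (assumption)

def make_q_net():
    return nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))

agents = [make_q_net() for _ in range(N_AGENTS)]
optimizer = torch.optim.Adam([p for net in agents for p in net.parameters()], lr=1e-3)

def decentralized_act(observations):
    """Execution: each agent picks its offloading target from its own
    local observation only; no global state is needed at run time."""
    with torch.no_grad():
        return [int(net(obs).argmax()) for net, obs in zip(agents, observations)]

def centralized_train_step(obs, actions, team_reward, next_obs, gamma=0.99):
    """Training: the joint Q-value is the sum of the per-agent Q-values of the
    chosen actions (VDN-style decomposition), so one shared reward trains
    all agents cooperatively toward the team objective."""
    q_chosen = torch.stack(
        [net(o)[a] for net, o, a in zip(agents, obs, actions)]).sum()
    with torch.no_grad():
        q_next = torch.stack(
            [net(o).max() for net, o in zip(agents, next_obs)]).sum()
    loss = (team_reward + gamma * q_next - q_chosen) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# One synthetic step: decentralized action selection, then a centralized update.
obs = [torch.randn(OBS_DIM) for _ in range(N_AGENTS)]
next_obs = [torch.randn(OBS_DIM) for _ in range(N_AGENTS)]
acts = decentralized_act(obs)
reward = -1.0  # e.g., -(latency + utilization + violation penalty), illustrative only
centralized_train_step(obs, acts, reward, next_obs)

After training, only decentralized_act is needed at inference time, which is what allows each edge server to output an offloading decision from local observations within a sub-millisecond budget.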
