Abstract

Mobile edge computing (MEC) is widely regarded as a promising technology for enabling wireless devices (WDs) to process computation-intensive tasks. Because WDs influence one another, collaborative task offloading is needed in multi-agent environments. In this paper, a multi-agent MEC network with delay-sensitive, non-partitionable tasks is considered, taking the load on the MEC servers into account. The joint optimization of offloading decisions and resource allocation is formulated to minimize the average delay. To realize collaborative decision-making, a multi-agent deep reinforcement learning algorithm is proposed under the framework of centralized training and decentralized execution: centralized deep neural networks (DNNs) learn from past experience, and each WD learns its policy from the centralized DNNs' evaluation of its actions. Based on the learned policies, WDs can make offloading decisions with only local information. Simulation results show that the proposed algorithm achieves near-optimal performance and remains highly stable in varying multi-agent environments.
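The centralized-training, decentralized-execution structure described above can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: all names and dimensions (`NUM_WDS`, `OBS_DIM`, the linear policies, and the linear critic) are hypothetical placeholders, and real systems would use trained DNNs in both roles.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_WDS = 3      # number of wireless devices (agents); illustrative value
OBS_DIM = 4      # size of a local observation (e.g. task size, channel state)
NUM_ACTIONS = 2  # 0 = compute locally, 1 = offload to the MEC server

# Each WD holds its own policy parameters; execution needs only local input.
policies = [rng.normal(size=(OBS_DIM, NUM_ACTIONS)) for _ in range(NUM_WDS)]

# The centralized critic sees the joint observations and joint actions,
# but only during training.
critic_w = rng.normal(size=(NUM_WDS * OBS_DIM + NUM_WDS,))

def act(wd, obs):
    """Decentralized execution: a WD decides from its local observation only."""
    logits = obs @ policies[wd]
    e = np.exp(logits - logits.max())
    probs = e / e.sum()
    return int(rng.choice(NUM_ACTIONS, p=probs))

def critic_value(joint_obs, joint_actions):
    """Centralized evaluation: score the joint state-action pair."""
    x = np.concatenate([joint_obs.ravel(), joint_actions])
    return float(critic_w @ x)

# One step: every WD acts independently on its own observation.
joint_obs = rng.normal(size=(NUM_WDS, OBS_DIM))
actions = np.array([act(i, joint_obs[i]) for i in range(NUM_WDS)], dtype=float)

# During training, the critic's score of the joint action would drive
# each WD's policy update; at deployment the critic is not needed.
value = critic_value(joint_obs, actions)
print(actions, value)
```

At deployment, only the per-WD policies are retained, which is what allows each device to decide with purely local information.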
