Abstract

Mobile edge computing (MEC) is widely regarded as a promising technology for enabling wireless devices (WDs) to process computation-intensive tasks. Because WDs influence one another, collaborative task offloading is needed in multi-agent environments. In this paper, a multi-agent MEC network with delay-sensitive, non-partitionable tasks is considered, taking the load on the MEC servers into account. The joint optimization of offloading decisions and resource allocation is formulated to minimize the average delay. To realize collaborative decision-making, a multi-agent deep reinforcement learning algorithm is proposed within the framework of centralized training and decentralized execution: centralized deep neural networks (DNNs) learn from past experience, and each WD learns its policy from the centralized DNNs' evaluations of its actions. With the learned policies, WDs can make offloading decisions using only local information. Simulation results show that the proposed algorithm achieves near-optimal performance and remains highly stable in varying multi-agent environments.
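The centralized-training, decentralized-execution pattern described above can be illustrated with a minimal sketch. All dimensions, network shapes, and observation contents here are illustrative assumptions, not the paper's actual architecture: each WD holds a small local policy (actor) that maps its own observation to an offloading decision, while a centralized critic, used only during training, evaluates the joint observations and actions of all WDs.

```python
import numpy as np

rng = np.random.default_rng(0)

N_WDS = 3        # number of wireless devices (agents); illustrative value
OBS_DIM = 4      # hypothetical local observation: task size, deadline, channel gain, queue length
N_ACTIONS = 2    # 0 = local execution, 1 = offload to the MEC server

# Decentralized actors: one small linear policy per WD (toy stand-in for a DNN).
actors = [rng.normal(0, 0.1, size=(OBS_DIM, N_ACTIONS)) for _ in range(N_WDS)]

# Centralized critic: during training it sees every WD's observation and action.
critic_w = rng.normal(0, 0.1, size=(N_WDS * (OBS_DIM + N_ACTIONS),))

def act(wd, obs):
    """Decentralized execution: each WD decides from local information only."""
    logits = obs @ actors[wd]
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(rng.choice(N_ACTIONS, p=probs))

def critic_value(all_obs, all_actions):
    """Centralized training signal: a score of the joint state-action."""
    onehots = np.eye(N_ACTIONS)[all_actions]
    joint = np.concatenate([np.concatenate([o, a]) for o, a in zip(all_obs, onehots)])
    return float(joint @ critic_w)

# One decentralized decision step, followed by a centralized evaluation.
observations = [rng.normal(size=OBS_DIM) for _ in range(N_WDS)]
actions = [act(i, observations[i]) for i in range(N_WDS)]
q = critic_value(observations, actions)
```

At execution time only `act` is needed, so each WD decides with purely local information; the critic (and the gradient updates it would drive, omitted here) exist only in the centralized training phase.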
