Abstract

Multi-access edge computing (MEC) enables end devices with limited computing power to handle computationally demanding tasks effectively. When each end device in an MEC scenario generates multiple tasks, scheduling these tasks reasonably and effectively is a large-scale discrete-action-space problem. In addition, how to exploit the spatial structure relationships that objectively exist in the given scenario is an important consideration for task-scheduling algorithms. In this work, we consider indivisible, time-sensitive tasks in this scenario and formalize the task-scheduling problem as minimizing long-term losses. We propose a multiagent collaborative deep reinforcement learning (DRL)-based distributed scheduling algorithm built on graph attention networks (GATs) to solve task-scheduling problems in the MEC scenario. Each end device creates a graph representation agent to extract latent spatial features of the scenario and a scheduling agent that uses a gated recurrent unit (GRU) to extract the timing-related features of the tasks and make scheduling decisions. The simulation results show that, compared with several baseline algorithms, our proposed algorithm exploits the spatial positional relationships of devices in the environment, significantly reduces the average delay and drop rate, and improves link utilization.
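The graph representation agents described above rely on graph attention: each device weights its neighbors' features by learned attention coefficients before aggregating them. The following is a minimal NumPy sketch of a single graph-attention aggregation step in the style of a standard GAT layer; all shapes, parameter names, and the specific weighting scheme are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    # Standard LeakyReLU, used for the raw attention logits in GAT
    return np.where(x > 0, x, slope * x)

def gat_layer(H, A, W, a):
    """One graph-attention aggregation step (illustrative sketch).

    H: (N, F)  node feature matrix (one row per device)
    A: (N, N)  adjacency matrix, 1 where an edge exists (self-loops included)
    W: (F, Fp) shared linear projection
    a: (2*Fp,) attention parameter vector, split into source/target halves
    Returns: aggregated features (N, Fp) and attention weights (N, N).
    """
    Z = H @ W                       # project node features: (N, Fp)
    Fp = Z.shape[1]
    src = Z @ a[:Fp]                # per-node "source" attention term: (N,)
    dst = Z @ a[Fp:]                # per-node "target" attention term: (N,)
    # Raw logit e_ij = LeakyReLU(a^T [z_i || z_j]), formed by broadcasting
    e = leaky_relu(src[:, None] + dst[None, :])        # (N, N)
    e = np.where(A > 0, e, -np.inf)                    # mask non-neighbors
    # Softmax over each node's neighborhood (rows)
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha = alpha / alpha.sum(axis=1, keepdims=True)
    return alpha @ Z, alpha

# Tiny example: 3 devices, device 0 connected to all, device 2 only to itself
rng = np.random.default_rng(0)
H = rng.normal(size=(3, 4))
A = np.array([[1, 1, 1],
              [1, 1, 0],
              [0, 0, 1]], dtype=float)
W = rng.normal(size=(4, 2))
a = rng.normal(size=(4,))
out, alpha = gat_layer(H, A, W, a)
```

Each row of `alpha` is a probability distribution over that device's neighbors, so features from nearby devices dominate the aggregated embedding, which is how spatial structure enters the scheduling decision.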
