Abstract

In multi-access edge computing (MEC), when many mobile devices (MDs) offload their tasks to an edge server (ES), the server's resources may become constrained, so tasks can take a long time to complete or may even be dropped. Because the state of the ESs and of the other MDs is unknown, it is difficult for each MD to determine its offloading policy independently. Furthermore, most offloading methods generalize poorly to new environments: their model architectures assume a fixed number of MDs and ESs, which prevents trained models from transferring to other settings. In this paper, we propose a fully decentralized offloading scheme based on a Curriculum Attention-weighted Graph Recurrent Network-based Multi-Agent Actor-Critic (CAGR-MAAC). First, we model the MEC system as a shared MD-agent–ES graph and design an AGR-based message network that lets each MD aggregate information from the ESs and the other MDs, mitigating the partial observability of MD agents in the MEC system. Second, a learnable, differentiable encoder network is introduced to construct each MD agent's local information encoding. The MD agent then converts all information about the MEC system into a fixed-size embedding via the AGR network, so that varying numbers of MDs and ESs can be handled. Finally, we introduce curriculum learning to cope with the high complexity of the MEC system and the training difficulties caused by large numbers of MDs and ESs. Experiments demonstrate that, compared with existing algorithms, CAGR-MAAC improves task completion rates and reduces system costs by 13.01%–15.03% and 16.45%–18.56%, respectively, and adapts quickly to new environments.
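To make the core idea concrete, the sketch below (not the authors' implementation; module names, dimensions, and the use of PyTorch are illustrative assumptions) shows how an attention-weighted aggregation over a variable number of ES/MD neighbor messages can produce a fixed-size embedding, with a GRU cell carrying information across decision steps, as the abstract describes.

import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionGraphRecurrentEncoder(nn.Module):
    """Illustrative encoder: variable-size neighbor set -> fixed-size embedding."""

    def __init__(self, local_dim: int, neighbor_dim: int, embed_dim: int):
        super().__init__()
        self.local_enc = nn.Linear(local_dim, embed_dim)     # encodes the MD's own observation
        self.neigh_enc = nn.Linear(neighbor_dim, embed_dim)  # encodes each ES/MD neighbor message
        self.query = nn.Linear(embed_dim, embed_dim)
        self.key = nn.Linear(embed_dim, embed_dim)
        self.gru = nn.GRUCell(2 * embed_dim, embed_dim)      # recurrent state across time steps

    def forward(self, local_obs, neighbor_obs, hidden):
        # local_obs: (local_dim,), neighbor_obs: (num_neighbors, neighbor_dim),
        # hidden: (embed_dim,). num_neighbors may differ between environments.
        h_local = torch.tanh(self.local_enc(local_obs))
        h_neigh = torch.tanh(self.neigh_enc(neighbor_obs))
        # Attention weights over neighbors: the weighted sum is permutation-
        # invariant and yields a fixed-size vector regardless of how many
        # MDs/ESs are present.
        scores = self.key(h_neigh) @ self.query(h_local)
        weights = F.softmax(scores / h_neigh.shape[-1] ** 0.5, dim=0)
        aggregated = (weights.unsqueeze(-1) * h_neigh).sum(dim=0)
        new_hidden = self.gru(torch.cat([h_local, aggregated]).unsqueeze(0),
                              hidden.unsqueeze(0)).squeeze(0)
        return new_hidden  # fixed-size embedding fed to the actor/critic heads


# Usage: the same encoder handles 5 neighbors or 9 neighbors without any change.
enc = AttentionGraphRecurrentEncoder(local_dim=8, neighbor_dim=6, embed_dim=32)
emb = enc(torch.randn(8), torch.randn(5, 6), torch.zeros(32))
emb = enc(torch.randn(8), torch.randn(9, 6), emb)

Because the neighbor information enters only through the attention-weighted sum, the embedding size is independent of the number of MDs and ESs, which is what allows a trained policy to be transferred across environments of different scale.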
