Abstract

Network applications involve massive heterogeneous data fusion and analysis. Artificial intelligence can significantly improve convenience and user experience, but it requires substantial storage, bandwidth, and computing resources. Multiaccess edge computing (MEC) extends intelligence services to IoT devices through offloading and joint processing, which alleviates the resource bottleneck. However, designing advanced collaboration techniques to offload tasks to MEC servers remains challenging. Heuristic algorithms and deep reinforcement learning (DRL)-based approaches have been proposed to offload tasks and minimize application latency. However, heuristic algorithms depend heavily on accurate mathematical models of the MEC system, and DRL does not fully exploit the relationships between devices in the MEC graph. To address this, we propose a task offloading mechanism based on graph neural networks (GNNs), which can learn directly on graph data through message passing and aggregation. We propose a graph reinforcement learning-based offloading (GRLO) framework, which models the MEC system as an acyclic graph and derives the offloading policy through graph state migration. GRLO combines a GNN with an actor-critic network and trains offloading decision makers without labels. To train GRLO efficiently, we propose a method that quickly explores the action space and approaches the optimal solution. Numerical results show that GRLO achieves lower latency than the baselines while generalizing to new environments and topologies. Moreover, we verify the effectiveness of GRLO on a prototype.
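To make the combination of message-passing GNN encoding and actor-critic decision making more concrete, the sketch below shows one possible realization in PyTorch. It is not the authors' implementation: the node features, layer sizes, number of message-passing rounds, and the two-action space (0 = execute locally, 1 = offload to the MEC server) are all illustrative assumptions.

```python
# Illustrative sketch (assumed design, not the GRLO implementation): a
# message-passing GNN encoder shared by an actor head (per-device offloading
# policy) and a critic head (graph-level value estimate).
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    """One round of mean neighbor aggregation followed by a node update."""
    def __init__(self, dim):
        super().__init__()
        self.update = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())

    def forward(self, h, adj):
        # adj: (N, N) adjacency matrix of the MEC graph; h: (N, dim) node states.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        msg = adj @ h / deg                              # mean of neighbor messages
        return self.update(torch.cat([h, msg], dim=-1))

class GraphActorCritic(nn.Module):
    """GNN encoder with an actor head (offloading logits per device)
    and a critic head (value of the whole graph state)."""
    def __init__(self, in_dim, hid_dim=64, n_actions=2, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(in_dim, hid_dim)
        self.layers = nn.ModuleList(MessagePassingLayer(hid_dim) for _ in range(n_layers))
        self.actor = nn.Linear(hid_dim, n_actions)       # per-device offloading logits
        self.critic = nn.Linear(hid_dim, 1)              # graph-level state value

    def forward(self, x, adj):
        h = torch.relu(self.embed(x))
        for layer in self.layers:
            h = layer(h, adj)
        logits = self.actor(h)                           # (N, n_actions)
        value = self.critic(h.mean(dim=0))               # scalar value from mean pooling
        return logits, value

# Toy usage: 4 IoT devices, 3 assumed features each (e.g., task size, CPU load, link rate).
x = torch.rand(4, 3)
adj = torch.tensor([[0, 1, 1, 0],
                    [1, 0, 0, 1],
                    [1, 0, 0, 1],
                    [0, 1, 1, 0]], dtype=torch.float32)
logits, value = GraphActorCritic(in_dim=3)(x, adj)
actions = torch.distributions.Categorical(logits=logits).sample()  # per-device decision
```

In an actor-critic training loop, the sampled per-device actions would be executed in the MEC environment, and the observed latency-based reward together with the critic's value estimate would drive label-free policy updates, consistent with the abstract's description of training without labels.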
