Abstract

Edge computing is a promising emerging computing paradigm that brings computation and storage resources to the network edge, thereby significantly reducing service latency and network traffic. In edge computing, many applications are composed of dependent tasks, where the outputs of some tasks are the inputs of others. How to offload these tasks to the network edge is a vital and challenging problem that aims to determine the placement of each running task so as to maximize the Quality of Service (QoS). Most existing studies either design heuristic algorithms that lack strong adaptivity or propose learning-based methods that ignore the intrinsic task dependency. In contrast, we propose an intelligent task offloading scheme that leverages off-policy reinforcement learning empowered by a Sequence-to-Sequence (S2S) neural network, where the dependent tasks are represented by a Directed Acyclic Graph (DAG). To improve training efficiency, we combine a specific off-policy policy gradient algorithm with a clipped surrogate objective. We then conduct extensive simulation experiments using heterogeneous applications modelled by synthetic DAGs. The results demonstrate that: 1) our method converges quickly and steadily during training; and 2) it outperforms existing methods and approximates the optimal solution in terms of latency and energy consumption under various scenarios.
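
To make the "clipped surrogate objective" mentioned above concrete, the following is a minimal, illustrative sketch of a PPO-style clipped surrogate loss in PyTorch. It is not the authors' exact formulation; the function name, arguments, and the clipping range of 0.2 are assumptions for illustration only. The log-probabilities would, in the paper's setting, come from the S2S decoder that emits offloading decisions for the DAG tasks.

```python
import torch

def clipped_surrogate_loss(new_logp, old_logp, advantages, eps=0.2):
    """Illustrative PPO-style clipped surrogate objective (assumed, not the paper's exact loss).

    new_logp:   log-probabilities of the sampled offloading actions under the current policy
                (e.g. produced by the S2S decoder).
    old_logp:   log-probabilities of the same actions under the behaviour (old) policy.
    advantages: estimated advantages of the sampled offloading decisions.
    eps:        clipping range; 0.2 is a common default, not a value taken from the paper.
    """
    # Importance-sampling ratio between the new and old policies.
    ratio = torch.exp(new_logp - old_logp)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
    # Maximize the surrogate objective, i.e. minimize its negation.
    return -torch.min(unclipped, clipped).mean()
```

The clipping limits how far a single update can move the new policy away from the behaviour policy, which is what makes off-policy policy-gradient updates of this kind stable and sample-efficient in practice.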
