The development of the Industrial Internet of Things (IIoT) and Industry 4.0 has transformed traditional manufacturing. Intelligent IIoT technology typically involves a large number of computation-intensive tasks, and resource-constrained IIoT devices often cannot meet the real-time requirements of these tasks. As a promising paradigm, mobile-edge computing (MEC) migrates computation-intensive tasks from resource-constrained IIoT devices to nearby MEC servers, thereby achieving lower delay and energy consumption. However, given the varying channel conditions and the distinct delay requirements of different computing tasks, coordinating task offloading among multiple users is challenging. In this article, we propose an autonomous partial offloading system for delay-sensitive computation tasks in multiuser IIoT MEC systems. Our goal is to provide offloading services with minimum delay for better Quality of Service (QoS). Inspired by recent advances in reinforcement learning (RL), we propose two RL-based offloading strategies that automatically optimize delay performance. Specifically, we first apply the <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">$Q$ </tex-math></inline-formula> -learning algorithm to produce discrete partial offloading decisions. Then, to further improve system performance through more flexible task offloading, we generate continuous offloading decisions based on the deep deterministic policy gradient (DDPG) algorithm. Simulation results show that the <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">$Q$ </tex-math></inline-formula> -learning scheme reduces the delay by 23%, and the DDPG scheme reduces the delay by 30%.
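To illustrate the discrete variant, the sketch below shows tabular Q-learning selecting a partial-offloading fraction per channel state. It is a minimal toy, not the paper's system model: the channel states, the max-of-local-and-remote delay model, the i.i.d. channel transitions, and all hyperparameters are illustrative assumptions.

```python
import random

# Hypothetical sketch: tabular Q-learning for discrete partial offloading.
# The delay model, state space, and hyperparameters are illustrative
# assumptions, not the system model of the paper.

ACTIONS = [0.0, 0.25, 0.5, 0.75, 1.0]   # fraction of the task offloaded
STATES = [0, 1, 2]                       # coarse channel-quality levels
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1       # learning rate, discount, exploration

def delay(state, frac):
    """Toy delay model: local and offloaded parts execute in parallel,
    so total delay is the slower of the two paths."""
    local = (1.0 - frac) * 10.0          # local execution delay
    remote = frac * 10.0 / (state + 1)   # offloading delay; better channel -> faster
    return max(local, remote)

Q = {(s, a): 0.0 for s in STATES for a in range(len(ACTIONS))}

random.seed(0)
state = random.choice(STATES)
for _ in range(5000):
    # epsilon-greedy action selection
    if random.random() < EPS:
        a = random.randrange(len(ACTIONS))
    else:
        a = max(range(len(ACTIONS)), key=lambda i: Q[(state, i)])
    reward = -delay(state, ACTIONS[a])   # minimizing delay = maximizing -delay
    nxt = random.choice(STATES)          # i.i.d. channel transitions (assumption)
    best_next = max(Q[(nxt, i)] for i in range(len(ACTIONS)))
    # standard Q-learning temporal-difference update
    Q[(state, a)] += ALPHA * (reward + GAMMA * best_next - Q[(state, a)])
    state = nxt

# greedy offloading policy per channel state
policy = {s: ACTIONS[max(range(len(ACTIONS)), key=lambda i: Q[(s, i)])]
          for s in STATES}
```

Under this toy model the learned policy offloads a larger fraction as the channel improves, since a faster link shifts the delay bottleneck away from transmission. DDPG replaces the discrete table with actor and critic networks, letting the offloading fraction vary continuously in [0, 1].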