Abstract

Performing deep neural network (DNN) inference in real time demands substantial network resources, which poses a significant challenge for resource-constrained industrial Internet of Things (IIoT) networks. To address this challenge, we introduce an end-edge-cloud orchestration architecture in which inference task assignment and DNN model placement are flexibly coordinated. Specifically, DNN models, trained and pre-stored in the cloud, are properly placed at the end devices and edge servers to perform DNN inference. To achieve efficient DNN inference, a multi-dimensional resource management problem is formulated to maximize the average inference accuracy while satisfying the strict delay requirements of inference tasks. Due to its mixed-integer decision variables, the formulated problem is difficult to solve directly. Thus, we transform it into a Markov decision process (MDP), which can be solved efficiently. Furthermore, a deep reinforcement learning (DRL)-based resource management scheme is proposed to make real-time optimal resource allocation decisions. Simulation results demonstrate that the proposed scheme can efficiently allocate the available spectrum, caching, and computing resources, and improves average inference accuracy by 31.4% compared with the deep deterministic policy gradient (DDPG) benchmark.
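To make the MDP abstraction concrete, the sketch below casts resource management as a sequential decision problem: the state captures current resource availability, the action selects a joint task-assignment and model-placement decision, and the reward is the inference accuracy achieved when the delay requirement is met. The discretization sizes, environment dynamics, and reward shaping are illustrative assumptions, and tabular Q-learning stands in for the paper's DRL scheme, which the abstract does not specify in detail.

```python
# Minimal sketch of the resource-management MDP, assuming a discretized
# state/action space; dynamics and reward here are illustrative placeholders,
# not the paper's actual formulation.
import numpy as np

rng = np.random.default_rng(0)

N_STATES = 16    # discretized (spectrum, caching, computing) availability
N_ACTIONS = 8    # discretized (task assignment, model placement) choices
EPISODES = 500
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # learning rate, discount, exploration

Q = np.zeros((N_STATES, N_ACTIONS))

def step(state, action):
    """Toy environment: reward is the inference accuracy achieved if the
    task's delay requirement is met, and a penalty otherwise."""
    accuracy = 0.5 + 0.5 * ((action + state) % N_ACTIONS) / N_ACTIONS
    delay_met = rng.random() < accuracy        # placeholder delay model
    reward = accuracy if delay_met else -1.0
    next_state = int(rng.integers(N_STATES))   # resource availability evolves
    return next_state, reward

state = int(rng.integers(N_STATES))
for _ in range(EPISODES):
    # epsilon-greedy selection over joint allocation decisions
    if rng.random() < EPS:
        action = int(rng.integers(N_ACTIONS))
    else:
        action = int(Q[state].argmax())
    next_state, reward = step(state, action)
    # standard temporal-difference update toward the Bellman target
    Q[state, action] += ALPHA * (reward + GAMMA * Q[next_state].max()
                                 - Q[state, action])
    state = next_state

print("Greedy allocation decision per state:", Q.argmax(axis=1))
```

In the paper's setting, the tabular table would be replaced by a neural policy (the reported benchmark is DDPG, an actor-critic method for continuous actions), but the state-action-reward structure of the loop is the same.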
