Abstract

Machine learning (ML) plays a key role in Intelligent Industrial Internet of Things (IIoT) applications. Processing of computation-intensive ML tasks can be greatly accelerated by augmenting traditional cloud-based schemes with edge computing (EC). However, system optimizations in existing works tend to ignore the inference accuracy of ML models of different complexities and its impact on erroneous task inference. In this article, we propose a joint task offloading and resource allocation scheme for accuracy-aware ML-based IIoT applications in an edge–cloud network architecture. We aim to minimize the long-term average system cost, which is affected by task offloading, computing resource allocation, and the inference accuracy of the ML models deployed on the sensors, the edge server, and the cloud server. The Lyapunov optimization technique is applied to convert the long-term stochastic optimization problem into a sequence of short-term deterministic problems. To solve the per-slot problem efficiently, we propose an optimal algorithm based on the generalized Benders decomposition (GBD) technique and a heuristic algorithm based on proportional computing resource allocation and comparison of task offloading strategies. The performance of our scheme is established by theoretical analysis and evaluated by extensive simulations conducted in multiple scenarios. Simulation results demonstrate the effectiveness and superiority of both algorithms in comparison with several schemes proposed in existing works.
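
To make the per-slot decision structure concrete, the sketch below is an illustrative reading of the heuristic outlined above, not the authors' implementation: for each task, candidate offloading targets (sensor, edge, cloud) are compared under a proportional allocation of computing resources, and the target with the lowest drift-plus-penalty style cost is chosen. All names, tier parameters, the trade-off weight V, and the accuracy-penalty term are assumptions introduced here for illustration.

```python
# Hedged sketch of a per-slot offloading comparison with proportional resource
# allocation. Tier capacities, accuracies, rates, and cost weights are invented.

from dataclasses import dataclass

@dataclass
class Task:
    cycles: float        # CPU cycles required by the ML inference task
    data_bits: float     # input data size in bits
    accuracy_req: float  # minimum acceptable inference accuracy

# Hypothetical per-tier parameters: computing capacity (cycles/s), model
# accuracy of the tier's deployed ML model, and uplink rate (bits/s).
TIERS = {
    "sensor": {"capacity": 1e8,  "accuracy": 0.80, "tx_rate": None},  # local, no transmission
    "edge":   {"capacity": 5e9,  "accuracy": 0.90, "tx_rate": 1e7},
    "cloud":  {"capacity": 5e10, "accuracy": 0.97, "tx_rate": 2e6},
}

V = 10.0  # assumed Lyapunov trade-off weight between queue backlog and cost

def proportional_share(capacity: float, task: Task, total_cycles: float) -> float:
    """Allocate a tier's computing resource in proportion to the task's cycle demand."""
    return capacity * task.cycles / total_cycles

def slot_cost(tier: str, task: Task, queue_backlog: float, total_cycles: float) -> float:
    """Drift-plus-penalty style cost of one candidate offloading decision."""
    p = TIERS[tier]
    cpu = proportional_share(p["capacity"], task, total_cycles)
    compute_delay = task.cycles / cpu
    tx_delay = 0.0 if p["tx_rate"] is None else task.data_bits / p["tx_rate"]
    # Penalize candidates whose model accuracy misses the task's requirement.
    accuracy_penalty = max(0.0, task.accuracy_req - p["accuracy"]) * 100.0
    delay = compute_delay + tx_delay
    # Backlog-weighted drift term plus V-weighted penalty (cost) term.
    return queue_backlog * delay + V * (delay + accuracy_penalty)

def choose_offloading(task: Task, queue_backlog: float, total_cycles: float) -> str:
    """Compare the three offloading strategies and return the cheapest one."""
    return min(TIERS, key=lambda t: slot_cost(t, task, queue_backlog, total_cycles))

if __name__ == "__main__":
    task = Task(cycles=2e8, data_bits=4e5, accuracy_req=0.85)
    # Assume this task competes with others totalling 1e9 cycles on each tier.
    print(choose_offloading(task, queue_backlog=5.0, total_cycles=1e9))
```

Under these made-up parameters the sensor tier is ruled out by its accuracy penalty and long compute delay, and the comparison reduces to the transmission-versus-computation trade-off between the edge and cloud tiers, which mirrors the accuracy-aware cost trade-off the abstract describes.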
