Abstract

Industrial applications such as real-time manufacturing, fault classification and inference, and autonomous driving are data-driven: they require machine learning over the wealth of data generated by industrial Internet of Things (IoT) devices. However, transmitting this rich data to a remote data center for training may be undesirable because of non-negligible network transmission delay and data-privacy concerns. By deploying computing-capable devices at the network edge, edge computing enables machine learning close to the industrial environment. Given the heterogeneous computing capabilities and network locations of edge devices, two edge-computing-based machine learning models are feasible: centralized learning and federated learning. In centralized learning, a resource-rich edge server aggregates data from different IoT devices and performs the training itself. In federated learning, distributed edge devices and a federated server collaborate to train the model. The two models differ fundamentally in where data resides: centralized learning offloads raw data to the server, whereas federated learning keeps data on the devices and trains locally. We study the computation offloading problem for edge-computing-based machine learning in an industrial environment under both models. We formulate a machine-learning-based offloading problem whose goal is to minimize the training delay, and we design an energy-constrained delay-greedy (ECDG) algorithm to solve it. Finally, simulation studies based on the MNIST dataset illustrate the efficiency of the proposal.
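The privacy-preserving property that distinguishes federated learning from centralized learning can be sketched in a few lines: each edge device runs local training on its private data and only model weights, never raw samples, reach the server, which averages them. This is a minimal toy illustration of the federated averaging idea on a synthetic linear-regression task, not the ECDG algorithm or the paper's experimental setup; the shard sizes, learning rate, and round count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: three edge devices each hold a private shard of a
# linear-regression task drawn from the same underlying model.
true_w = np.array([2.0, -1.0])
clients = []
for n in (50, 80, 120):  # unequal shard sizes across devices
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    clients.append((X, y))

def local_update(w, X, y, lr=0.1, epochs=5):
    """One round of local gradient descent on a device's private data."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

# Federated averaging: the server sees only weights, never raw data,
# and averages them weighted by each device's sample count.
w = np.zeros(2)
sizes = np.array([len(y) for _, y in clients])
for _ in range(20):  # communication rounds
    local_weights = [local_update(w, X, y) for X, y in clients]
    w = np.average(local_weights, axis=0, weights=sizes)

print(np.round(w, 2))  # the aggregate model approaches true_w
```

In the centralized-learning counterpart, the server would instead concatenate all `(X, y)` shards and train directly, which is why the two models face very different offloading trade-offs: centralized learning pays the cost of transmitting data, federated learning the cost of repeated weight exchanges.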

