Abstract

In the current era of the internet of things (IoT), billions of edge devices are potentially connected to each other. Most applications built on these edge devices generate massive amounts of online data and require real-time computation and decision making with low latency (e.g., robotics/drones, self-driving cars, smart IoT, and electronic/wearable devices). To meet this requirement, the next generation of intelligent edge devices must be capable of executing complex machine learning algorithms on live data in real time. Considering the different layers of IoT and the concept of distributed computing, this paper proposes three operational models in which the ML algorithm is executed in a distributed manner between the edge and cloud layers of IoT so that the edge node can make decisions in real time. The three models are: model 1, both training and prediction are performed locally at the edge; model 2, training is done at the server and decision making at the edge node; and model 3, distributed training and distributed decision making are performed at the edge level with globally shared knowledge and security. All three models have been tested with a support vector machine on thirteen diverse datasets to profile their performance in terms of both training and inference time. A comparative study of the computational performance of the edge and cloud nodes is also presented. Through simulated experiments on the different datasets, it is observed that edge-node inference is approximately ten times faster than cloud inference for all tested datasets under each proposed model, while the model 2 training time is approximately nine times shorter than that of model 1 and eleven times shorter than that of model 3.
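The kind of profiling described above can be sketched as follows. This is an illustrative example, not the authors' code: it times the training and inference phases of a support vector machine (here scikit-learn's `SVC`) on a synthetic dataset, the same measurement one would repeat on the edge and cloud nodes for each model; the dataset, kernel choice, and sizes are assumptions for demonstration only.

```python
import time
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Synthetic stand-in for one of the benchmark datasets.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf")

t0 = time.perf_counter()
clf.fit(X_train, y_train)      # training phase (edge or server, depending on the model)
train_time = time.perf_counter() - t0

t0 = time.perf_counter()
pred = clf.predict(X_test)     # inference phase (edge node)
infer_time = time.perf_counter() - t0

print(f"train: {train_time:.4f}s  inference: {infer_time:.4f}s")
```

Running the same script on an edge device and on a cloud instance, and comparing the two timing pairs, yields the kind of edge-versus-cloud comparison reported in the paper.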
