Abstract

Containers package a piece of software in a virtualized environment and share the host Operating System (OS) upon deployment. This makes them notably lightweight and well suited for dynamic service deployment at the network edge and on Internet of Things (IoT) devices, reducing latency and energy consumption. Data collection, computation, and now intelligence are embedded in a variety of IoT devices that operate under very tight latency and energy constraints. Recent studies satisfy the latency constraint by deploying containerized services on IoT devices and gateways, but they fail to account for the limited energy and computing resources of these devices, which restrict scalability and concurrent service deployment. This paper aims to establish guidelines and identify critical factors for deploying containerized services on resource-constrained IoT devices. For this purpose, two container orchestration tools (i.e., Docker Swarm and Kubernetes) are tested and compared on a baseline testbed of IoT gateways. The experiments use Deep Learning driven data analytics and Intrusion Detection System (IDS) services, and evaluate the time it takes to prepare and deploy a container (creation time), Central Processing Unit (CPU) utilization for concurrent container deployment, memory usage under different traffic loads, and energy consumption. The results indicate that container creation time and memory usage are decisive factors for a containerized microservice architecture.

Highlights

  • Internet of Things (IoT) devices are becoming ubiquitous in our daily lives

  • For 30 service containers, the Firewall service took 157 s, the Snort service took 117 s, and the Nginx service took 127 s to create; in general, an IDS service therefore took approximately 4 min to be created on three IoT gateways. Another observation was that the creation time of each IDS service grew proportionally with the number of container instances, and this held as the instance count increased further

  • This paper presents a performance analysis of different containers on resource-constrained devices and compares the results of two orchestration systems, namely Docker Swarm (DS) and Kubernetes
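The creation times reported above can be obtained by timestamping the deployment command on an orchestration manager node. A minimal sketch in Python, where the actual `docker service create` invocation is an assumption of the setup (any command list can stand in for it here):

```python
import subprocess
import time

def measure_creation_time(cmd):
    """Time how long a deployment command takes to return, in seconds.

    On a Docker Swarm manager, `cmd` would be something like
    ["docker", "service", "create", "--replicas", "30", "snort-image"];
    here any command list works as a stand-in.
    """
    start = time.monotonic()
    subprocess.run(cmd, check=True)
    return time.monotonic() - start

# Harmless stand-in for a real `docker service create` call:
elapsed = measure_creation_time(["sleep", "0.1"])
print(f"creation time: {elapsed:.2f} s")
```

Wall-clock timing around the blocking CLI call is a simplification: it captures image pull, scheduling, and container start together, which matches the paper's notion of "time to prepare and deploy a container".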


Introduction

IoT devices consist of sensors that collect data and send it to the cloud for further processing and analysis [1,2]. A major issue in these service scenarios is to promptly process the data on IoT devices under given latency constraints with low energy consumption. Computation and power resources in IoT devices are severely constrained and cannot meet the given latency requirements. In the literature, this problem is tackled by offloading the computation to fog devices deployed at the network edge under a fog computing architecture [4,5,6].
