Abstract

Serverless computing, especially implemented through Function-as-a-Service (FaaS) platforms, has recently been gaining popularity as an application deployment model in which functions are automatically instantiated when called and scaled when needed. When a warm start deployment mode is used, the FaaS platform gives users the perception of constantly available resources. Conversely, when a cold start mode is used, containers running the application’s modules are automatically destroyed when the application has been executed. The latter can lead to considerable resource and cost savings. In this paper, we explore the suitability of both modes for deploying Internet of Things (IoT) applications, considering a low-resource testbed comparable to an edge node. We discuss the implementation and the experimental analysis of an IoT serverless platform that includes typical IoT service elements. A performance study in terms of resource consumption and latency is presented for the warm and cold start deployment modes, implemented using OpenFaaS, a well-known open-source FaaS framework whose flexibility allows testing a cold start deployment with a precisely configured inactivity time. This experimental analysis allows us to evaluate the aptness of the two deployment modes under different operating conditions: exploiting the minimum inactivity time configurable in OpenFaaS, we find that the cold start mode can be convenient in order to save the limited resources of edge nodes, but only if the data transmission period is significantly higher than the time needed to trigger container shutdown.
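
To make the warm/cold comparison concrete, the following is a minimal measurement sketch in Python. It assumes an OpenFaaS gateway reachable at http://127.0.0.1:8080 and a deployed function hypothetically named iot-monitor (neither detail, nor the payload, comes from the paper); it simply times two consecutive invocations, the first issued after an idle period longer than the configured inactivity time (cold) and the second immediately afterwards (warm).

    # Minimal sketch: timing a cold and a warm invocation of an OpenFaaS function.
    # Gateway URL, function name and payload are illustrative assumptions.
    import time
    import requests

    GATEWAY = "http://127.0.0.1:8080"
    FUNCTION = "iot-monitor"  # hypothetical function name, not from the paper
    URL = f"{GATEWAY}/function/{FUNCTION}"
    PAYLOAD = '{"sensor": "temp-01", "value": 21.5}'  # illustrative payload

    def invoke_once() -> float:
        """Invoke the function once and return the observed service time in seconds."""
        start = time.perf_counter()
        resp = requests.post(URL, data=PAYLOAD, timeout=60)
        resp.raise_for_status()
        return time.perf_counter() - start

    # The first call, issued after the function has been idle longer than the
    # configured inactivity time, pays the cold start penalty; the immediate
    # follow-up call hits a warm (already running) container.
    cold = invoke_once()
    warm = invoke_once()
    print(f"cold start: {cold:.3f} s, warm start: {warm:.3f} s")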

Highlights

  • The Internet of Things (IoT) is a popular expression widely used to encompass the many related networking and application aspects

  • Data Monitoring and Processing (Figure 4): When a message is published on a topic (1), the Emitter broker, via the OpenFaaS native Message Queuing Telemetry Transport (MQTT) connector (2), invokes the serverless monitoring function subscribed to that topic, sending the message of interest as input (a sketch of such a monitoring function is given after this list)

  • In order to extract more general results from the current set of experiments, we plot the service time as a function of the normalized transmission period, that is, the ratio between the actual data transmission period from the sensors (T_sensor) and the idle time configured in OpenFaaS (T_idle); the ratio is written out as a formula after this list
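
Written out explicitly in LaTeX notation, the normalized transmission period from the last highlight is the following ratio (the symbol \tau is shorthand introduced here, not notation taken from the paper):

    \[
        \tau \;=\; \frac{T_{\mathrm{sensor}}}{T_{\mathrm{idle}}}
    \]

Values of \tau well above 1 correspond to the regime in which the configured idle time actually expires between consecutive transmissions, so containers are scaled down and each new message pays the cold start penalty; per the abstract, this is also the regime in which the cold start mode can pay off in saved edge resources.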
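
As an illustration of the monitoring step described in the second highlight, here is a minimal sketch of a handler written in the style of the OpenFaaS Python template (a handle(req) function). The payload fields ("sensor", "value") and the alert threshold are assumptions for illustration, not details taken from the paper.

    # Minimal sketch of a serverless monitoring function in the style of the
    # OpenFaaS Python template: the MQTT connector invokes it with the message
    # published on the subscribed topic as the request body.
    # Payload fields and the threshold below are illustrative assumptions.
    import json

    THRESHOLD = 30.0  # hypothetical alert threshold

    def handle(req: str) -> str:
        """Parse the MQTT message forwarded by the connector and return a verdict."""
        try:
            msg = json.loads(req)
        except (json.JSONDecodeError, TypeError):
            return json.dumps({"error": "payload is not valid JSON"})
        if not isinstance(msg, dict):
            return json.dumps({"error": "payload is not a JSON object"})
        try:
            value = float(msg.get("value"))
        except (TypeError, ValueError):
            return json.dumps({"error": "missing or non-numeric 'value' field"})
        return json.dumps({
            "sensor": msg.get("sensor"),
            "value": value,
            "alert": value > THRESHOLD,
        })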

Summary

Introduction

The Internet of Things (IoT) is a popular expression widely used to encompass the many related networking and application aspects. FaaS is intended to be different from software as a service (SaaS) because of its event-based architecture and auto-scalability, emphasizing its virtual independence from the underlying servers (i.e., serverless). This is in contrast to traditional methods in which the needed resources are planned during application design and provided during the deployment phase. Instead of deploying a whole platform or an application on cloud servers, with FaaS only functions are required, as components of complex applications. Such functions can be loaded as virtual containers when needed, on demand, and possibly in parallel, without any need for controlling application deployment processes at the operating-system level. Through the use of advanced virtualization technologies, FaaS can ensure that the volume of resources consumed by an application is dynamically controlled and tailored to the actual computing needs. This is clearly a valuable feature for short and on-demand tasks.
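
As a concrete illustration of this on-demand model, the sketch below deploys a single function through the OpenFaaS gateway REST API and labels it as eligible for scale-to-zero. The gateway address, credentials, image name, and the exact endpoint and label names reflect common OpenFaaS usage and should be treated as assumptions rather than details taken from the paper.

    # Minimal sketch: deploying one function via the OpenFaaS gateway REST API
    # and labelling it for scale-to-zero, so its container can be removed after
    # the configured inactivity time and recreated on the next invocation.
    # Endpoint, credentials, image and label names are assumptions based on
    # common OpenFaaS usage, not details from the paper.
    import requests

    GATEWAY = "http://127.0.0.1:8080"
    AUTH = ("admin", "password")  # replace with your gateway credentials

    function_spec = {
        "service": "iot-monitor",                       # hypothetical function name
        "image": "example/iot-monitor:latest",          # hypothetical container image
        "labels": {"com.openfaas.scale.zero": "true"},  # assumed scale-to-zero label
    }

    resp = requests.post(f"{GATEWAY}/system/functions", json=function_spec, auth=AUTH)
    resp.raise_for_status()
    print("deploy request accepted for", function_spec["service"])

With such a label in place, the platform's idler component can remove the function's container after the configured inactivity time, so the next invocation recreates it on demand and pays the cold start latency studied in the paper.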

Related Work
System Architecture
Test Environment and Research Methodology
Service Time
Resource Consumption
Findings
Conclusions and Future Work