Abstract

Cloud native programming and serverless architectures provide a novel way of developing and operating software. They enable a new generation of applications with features never seen before, while significantly reducing the burden on developers and operators. However, latency sensitive applications, such as various distributed IoT services, generally do not fit well with these new concepts and today's platforms. In this article, we adapt the cloud native approach and related operating techniques to latency sensitive IoT applications operated on public serverless platforms. We argue that merely adding cloud resources at the edge is not enough; further mechanisms and operation layers are required to achieve the desired level of quality. Our contribution is threefold. First, we propose a novel system on top of a public serverless edge cloud platform which can dynamically optimize and deploy the microservice-based software layout based on live performance measurements. We add two control loops and the corresponding mechanisms responsible for online reoptimization at different timescales. The first addresses steady-state operation, while the second provides fast latency control by directly reconfiguring the serverless runtime environments. Second, we apply our general concepts to one of today's most widely used and versatile public cloud platforms, namely Amazon Web Services (AWS), and its edge extension for IoT applications, called Greengrass. Third, we characterize the main operation phases and evaluate the overall performance of the system. We analyze the performance characteristics of the two control loops and investigate different implementation options.
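To make the fast latency control more concrete, the following is a minimal sketch, not the paper's actual implementation, of what directly reconfiguring a serverless runtime environment can look like on AWS Lambda via boto3; the function name, region, latency threshold, and memory step are illustrative assumptions.

```python
# Illustrative sketch of the fast control loop's actuation step: when the
# measured latency violates the target, request a larger runtime flavor,
# i.e., more memory (and thus a larger CPU share), for the function.
import boto3

lambda_client = boto3.client("lambda", region_name="eu-west-1")  # region is an assumption

def scale_runtime_if_slow(function_name: str, measured_latency_ms: float,
                          target_latency_ms: float, step_mb: int = 256) -> None:
    """Grow the function's memory size when observed latency exceeds the target."""
    if measured_latency_ms <= target_latency_ms:
        return
    current = lambda_client.get_function_configuration(FunctionName=function_name)
    new_memory = min(current["MemorySize"] + step_mb, 10240)  # 10240 MB is Lambda's limit
    lambda_client.update_function_configuration(
        FunctionName=function_name,
        MemorySize=new_memory,
    )

# Example: react to a fresh measurement coming from the monitoring component
# (function name and values are placeholders).
scale_runtime_if_slow("image-preprocess", measured_latency_ms=180.0,
                      target_latency_ms=120.0)
```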

Highlights

  • Cloud native programming, microservices, and serverless architectures provide a novel way of software development and operation

  • We propose a novel system on top of public cloud platforms extended with edge resources, which can dynamically optimize and deploy applications following the microservice software architecture, based on live performance measurements

  • Observe that the number of calls depends on how many objects are found during the preprocessing stage, which we treat as an application-specific metric (see the sketch after this list)
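As a purely illustrative sketch (not code from the paper), such an application-specific metric, e.g., the number of objects found by the preprocessing stage, could be exposed to the monitoring and control components as a custom CloudWatch metric; the namespace, metric name, and dimension below are hypothetical.

```python
# Illustrative sketch: publish an application-specific metric (objects found
# by the preprocessing stage) as a custom CloudWatch metric so the control
# loops can consume it. Namespace, metric name, and dimension are hypothetical.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")  # region is an assumption

def report_detected_objects(function_name: str, num_objects: int) -> None:
    cloudwatch.put_metric_data(
        Namespace="IoTApp/Preprocessing",
        MetricData=[{
            "MetricName": "DetectedObjects",
            "Dimensions": [{"Name": "FunctionName", "Value": function_name}],
            "Value": float(num_objects),
            "Unit": "Count",
        }],
    )
```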

Summary

INTRODUCTION

Cloud native programming, microservices, and serverless architectures provide a novel way of software development and operation. A dedicated component is responsible for composing the service: it selects the preferred building blocks, such as runtime flavors (defining the amount of resources to be assigned) and data stores, and determines the optimal grouping of the constituent functions and libraries, which are packaged into the respective FaaS platform artifacts. This approach can be extended to edge cloud infrastructures. We propose a novel system on top of public cloud platforms extended with edge resources, which can dynamically optimize and deploy applications following the microservice software architecture, based on live performance measurements. We target AWS and its edge extension for IoT applications, called Greengrass, but the concept is general and can be applied to other public cloud environments as well.
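As a rough illustration of the composition step described above, the sketch below attaches a selected runtime flavor (memory size) to one packaged group of functions on AWS Lambda; the function name, flavor table, runtime, handler, and role ARN are assumptions made for the example, not the paper's deployment code.

```python
# Illustrative sketch: deploy one packaged function group with the runtime
# flavor chosen by the layout optimizer. All names and values are examples.
import boto3

lambda_client = boto3.client("lambda", region_name="eu-west-1")  # region is an assumption

# Candidate runtime flavors (memory in MB) the optimizer may choose from.
FLAVORS = {"small": 256, "medium": 512, "large": 1024}

def deploy_function_group(name: str, zip_path: str, role_arn: str,
                          flavor: str = "medium") -> None:
    """Create a Lambda function that bundles one group of constituent
    functions and libraries into a single deployment artifact."""
    with open(zip_path, "rb") as f:
        artifact = f.read()
    lambda_client.create_function(
        FunctionName=name,
        Runtime="python3.9",
        Role=role_arn,
        Handler="handler.lambda_handler",
        Code={"ZipFile": artifact},
        MemorySize=FLAVORS[flavor],
        Timeout=30,
    )

# Example call (role ARN is a placeholder):
# deploy_function_group("preprocess-group", "preprocess.zip",
#                       "arn:aws:iam::123456789012:role/lambda-exec", flavor="large")
```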

BACKGROUND AND RELATED WORK
Serverless on Amazon Web Services
Serverless at the Edge With AWS IoT Greengrass
Automated Serverless Deployment and Optimization
TARGETED USE CASE
Design Goals
High Level Architecture and Operation
Performance of AWS Greengrass
Service Model
Platform Model
Cost and Latency Models
Optimization Problem
PROPOSED SYSTEM
Layout and Placement Optimizer
Serverless Deployment Engine
Automated Monitoring
Dynamic Reoptimization
EVALUATION
Reoptimization via the Steady State Control Loop
Dynamic Runtime Reconfiguration
CONCLUSION