Abstract

The cloud evolved into an attractive execution environment for parallel applications, which make use of compute resources to speed up the computation of large problems in science and industry. Whereas Infrastructure as a Service (IaaS) offerings have been commonly employed, more recently, serverless computing emerged as a novel cloud computing paradigm with the goal of freeing developers from resource management issues. However, as of today, serverless computing platforms are mainly used to process computations triggered by events or user requests that can be executed independently of each other and benefit from on-demand and elastic compute resources as well as per-function billing. In this work, we discuss how to employ serverless computing platforms to operate parallel applications. We specifically focus on the class of parallel task farming applications and introduce a novel approach to free developers from both parallelism and resource management issues. Our approach includes a proactive elasticity controller that adapts the physical parallelism per application run according to user-defined goals. Specifically, we show how to consider a user-defined execution time limit after which the result of the computation needs to be present while minimizing the associated monetary costs. To evaluate our concepts, we present a prototypical elastic parallel system architecture for self-tuning serverless task farming and implement two applications based on our framework. Moreover, we report on performance measurements for both applications as well as the prediction accuracy of the proposed proactive elasticity control mechanism and discuss our key findings.

Highlights

  • Serverless computing can be seen as a natural evolution of former cloud service models and is heavily influenced by microservices, container virtualization, and event-driven programming [53]

  • We argue that by following a skeleton-based approach, developers are relieved of parallelism and resource management issues while an elasticity controller is able to make use of elastic compute resources to automatically handle non-functional requirements related to parallel processing

  • Our main contributions are a proactive elasticity controller that employs non-linear regression techniques to control the number of Function as a Service (FaaS) functions per application run according to a user-defined execution time limit as well as a corresponding elastic parallel system architecture for self-tuning serverless task farming based on a serverless computing platform
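The paper does not reproduce its regression model here, but the idea of a proactive controller can be sketched. The following is a hypothetical illustration only: it assumes a simple runtime model T(n) = a + b/n (a fixed serial overhead plus a perfectly parallel part), fits it by ordinary least squares, and then picks the smallest number of FaaS functions whose predicted runtime meets the user-defined execution time limit. Because per-function billing charges each invocation, the smallest feasible function count also minimizes cost under this model.

```python
def fit_runtime_model(ns, times):
    """Fit T(n) = a + b/n by least squares on the feature x = 1/n.

    Hypothetical stand-in for the paper's non-linear regression:
    the model is non-linear in n but linear in its parameters, so
    the 2x2 normal equations can be solved directly.
    """
    xs = [1.0 / n for n in ns]
    m = len(ns)
    sx = sum(xs)
    sxx = sum(x * x for x in xs)
    sy = sum(times)
    sxy = sum(x * y for x, y in zip(xs, times))
    det = m * sxx - sx * sx
    b = (m * sxy - sx * sy) / det
    a = (sy - b * sx) / m
    return a, b

def min_functions_for_deadline(a, b, deadline, n_max=1000):
    """Smallest function count whose predicted runtime meets the
    deadline; fewer functions means lower cost under per-function
    billing. Returns None if the deadline is infeasible."""
    for n in range(1, n_max + 1):
        if a + b / n <= deadline:
            return n
    return None
```

For example, calibration runs measuring 102 s, 52 s, 27 s, and 14.5 s with 1, 2, 4, and 8 functions fit a = 2 and b = 100; for a 12.5 s limit the controller would then request 10 functions.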


Summary

Introduction

Serverless computing can be seen as a natural evolution of former cloud service models and is heavily influenced by microservices, container virtualization, and event-driven programming. Our main contributions are a proactive elasticity controller that employs non-linear regression techniques to control the number of FaaS functions per application run according to a user-defined execution time limit, as well as a corresponding elastic parallel system architecture for self-tuning serverless task farming based on a serverless computing platform. We show how to construct and integrate a proactive elasticity controller that automatically adapts the number of processing units (in the form of FaaS functions) per application run according to a user-defined execution time limit while minimizing the associated monetary costs of the computation. Parallel applications based on serverless skeletons provide two essential benefits: simplified application development without considering resource management issues, and specific insights into the structure of an application that can be exploited by an automated elasticity control mechanism. This is the key difference between our approach and existing work, which is discussed in Sect. Redis is an in-memory data store that can be used as a database, cache, or message broker.
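The outline below lists numerical integration as one of the two example applications. As an illustration only (in the paper's architecture, tasks are dispatched to FaaS functions and data is exchanged through backend services such as Redis), a minimal task-farming skeleton might look as follows, with a local thread pool standing in for the serverless platform:

```python
from concurrent.futures import ThreadPoolExecutor

def task_farm(tasks, worker, num_functions):
    """Task-farming skeleton: each task maps to one worker invocation.
    In the real system each invocation would be a FaaS function call
    and results would flow through a backend store such as Redis;
    here a thread pool is a local stand-in."""
    with ThreadPoolExecutor(max_workers=num_functions) as pool:
        return list(pool.map(worker, tasks))

def integrate_chunk(bounds):
    """Worker: trapezoidal rule for f(x) = x**2 on one subinterval."""
    lo, hi = bounds
    steps = 1000
    h = (hi - lo) / steps
    return sum(((lo + i * h) ** 2 + (lo + (i + 1) * h) ** 2) / 2 * h
               for i in range(steps))

# Split the integral of x**2 over [0, 1] into four independent tasks.
chunks = [(i / 4, (i + 1) / 4) for i in range(4)]
total = sum(task_farm(chunks, integrate_chunk, num_functions=4))
```

The independence of the tasks is what makes the farming pattern a good fit for per-invocation FaaS billing: the result is simply the sum of the partial integrals, so no coordination between functions is needed beyond collecting results.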

User and framework functions
Communication via backend services
Delivery and deployment
Numerical integration
Hyperparameter optimization
Constructing a prediction model
Serverless elastic parallel system architecture
Backend services
Parallel performance
Proactive elasticity control
Findings and discussion
Related work
Conclusion
