Abstract

In Function-as-a-Service (FaaS) clouds, customers deploy individual functions to the cloud, in contrast to complete virtual machines (IaaS) or Linux containers (PaaS). FaaS offerings are available in the largest public clouds (Amazon Lambda, Google Cloud Functions, Azure Serverless); there are also popular open-source implementations (Apache OpenWhisk) with commercial offerings (Adobe I/O Runtime, IBM Cloud Functions). A recent addition to FaaS is the ability to compose functions: a function may call another function, which, in turn, may call yet another function — forming a directed acyclic graph (DAG) of invocations. From the perspective of the infrastructure, a composed function is less opaque than a virtual machine or a container. We show that this additional information about the internal structure of the function enables the infrastructure provider to reduce the response latency. In particular, knowing the successors of a function in a DAG, the infrastructure can schedule these future invocations along with the necessary preparation of their execution environments. We model resource management in FaaS as a scheduling problem combining (1) sequencing of invocations; (2) deploying execution environments on machines; and (3) allocating invocations to deployed environments. For each aspect, we propose heuristics that employ FaaS-specific features. We explore their performance by simulation on a range of synthetic workloads and on workloads inspired by traces from an existing system. Our results show that if the setup times are long compared to invocation times, algorithms that use information about the composition of functions consistently outperform greedy, myopic algorithms, leading to a significant decrease in response latency.
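The core idea — overlapping environment setup for a function's DAG successors with the execution of its predecessor — can be illustrated with a minimal simulation. The following sketch is not the paper's model; the chain DAG, setup/run times, and the `makespan` helper are illustrative assumptions, comparing a myopic scheduler (cold-start each environment only when its invocation arrives) against a DAG-aware one that pre-warms successors:

```python
# Hypothetical 3-function chain A -> B -> C with long setup relative to run time.
SETUP = {"A": 5.0, "B": 5.0, "C": 5.0}  # environment preparation (cold start)
RUN = {"A": 1.0, "B": 1.0, "C": 1.0}    # invocation execution time
DAG = {"A": ["B"], "B": ["C"], "C": []}

def makespan(prewarm: bool) -> float:
    """End-to-end latency of one chain invocation. With prewarm=True,
    each successor's environment is set up while its predecessor runs."""
    ready_at = {f: None for f in SETUP}  # time each environment becomes warm
    ready_at["A"] = SETUP["A"]           # the root always cold-starts
    t = ready_at["A"]
    current = "A"
    while current:
        if prewarm:
            # DAG-aware: start warming successors as the current function starts.
            for succ in DAG[current]:
                ready_at[succ] = t + SETUP[succ]
        t += RUN[current]
        nxt = DAG[current][0] if DAG[current] else None
        if nxt:
            if ready_at[nxt] is None:    # myopic: cold-start only after finishing
                ready_at[nxt] = t + SETUP[nxt]
            t = max(t, ready_at[nxt])    # wait until the environment is warm
        current = nxt
    return t
```

With these assumed numbers, `makespan(False)` pays every setup on the critical path, while `makespan(True)` hides part of each setup behind the predecessor's execution — the gap widens as setup times grow relative to run times, consistent with the abstract's claim.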
