Function as a Service (FaaS) has enabled modular development, automated elasticity, and cloud-native implementation of applications. The decomposition of applications into functions allows more fine-grained performance profiling of application parts. Embedding this profiling into the function development lifecycle offers significant benefits: performance information per development version, optimized function sizing, and enhanced annotation of each function with its resource usage needs, which FaaS providers can exploit to optimally co-allocate function executions and minimize the overheads of their concurrent execution. In this paper, a function performance pipeline is created to directly analyze the performance of developed functions. The pipeline applies function benchmarking through load generation, profiling of the function's resource usage, and categorization of the function relative to other functions previously run in the framework. A two-step load generation is applied for enhanced validity of trace collection. Information is collected both from the function-side performance view (FaaS platform wait time, initialization, and execution duration) and from the underlying container platform (function-level execution traces for CPU, memory, filesystem, and network usage). The categorization stage clusters the available function trace metrics based on the acquired resource profiles and extracts low/medium/high centroids for each resource metric. These centroids are then used to classify and annotate each new function with its level of activity per resource metric. We have applied the function benchmarking and categorization on 4 example functions (fibonacci, list, sort, and fileRW), while testing the effect of the two-stage load generation. The latter improves profiling acquisition by 1.23x to 45x across the collected resource metrics.
Different function inputs are also applied in order to observe their effect on the categorization process itself. Finally, 8 different function co-allocation scenarios are evaluated based on different function categories, indicating concurrency overheads ranging from 15% to 552% performance degradation depending on the categories selected for co-placement.
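The categorization step described above can be illustrated with a minimal sketch: 1-D k-means over one resource metric extracts three sorted centroids (low/medium/high), and a new function is annotated by its nearest centroid. All function names, metric values, and labels here are illustrative assumptions, not the paper's actual implementation or data.

```python
def kmeans_1d(values, k=3, iters=50):
    """Simple 1-D k-means: returns k sorted centroids for one metric."""
    # Seed centroids by sampling the sorted value range.
    centroids = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            # Assign each observation to its nearest centroid.
            idx = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[idx].append(v)
        # Recompute each centroid as its cluster mean (keep old if empty).
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

def annotate(value, centroids, labels=("low", "medium", "high")):
    """Label a new function's metric by its nearest centroid."""
    idx = min(range(len(centroids)), key=lambda i: abs(value - centroids[i]))
    return labels[idx]

# Hypothetical normalized CPU-usage profiles from previously run functions.
cpu_profiles = [0.05, 0.08, 0.10, 0.45, 0.50, 0.55, 0.90, 0.92, 0.95]
centroids = kmeans_1d(cpu_profiles)
print(annotate(0.88, centroids))  # a CPU-heavy function maps to "high"
```

In the pipeline, one such set of centroids would be maintained per resource metric (CPU, memory, filesystem, network), so each function receives one low/medium/high annotation per metric.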