Abstract

We propose a benchmarking test suite to evaluate the performance of cloud serverless platforms, together with an open-source software tool that automates the test process. Additionally, we used this setup to compare the commercial offerings of Amazon, Google, Microsoft, and IBM. The work builds on ideas and experiments reported in the literature that, nevertheless, did not offer a “standard” set of coherent and comprehensive tests capable of analyzing diverse serverless platforms across the same set of features. To that end, we defined seven tests that cover scalability (latency and throughput), the effect of allocated memory, performance on CPU-bound cases, the impact of payload size, the influence of the programming language, resource management (e.g., reuse of containers), and overall platform overhead. We created a software tool to deploy the test code and collect metrics in a manner agnostic to the serverless platforms under study. At a time when the popularity of serverless computing is rising, this benchmarking suite and test tool enable developers to make informed decisions about the suitability and performance of each provider’s serverless offering. They also help to identify attributes in need of improvement in the existing platforms.
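
To illustrate the kind of probe such a suite deploys, the sketch below shows a minimal, purely illustrative serverless function that exercises a CPU-bound workload and reports whether its container was reused (a warm start). It is not the paper’s actual test code; the handler signature, the `iterations` event field, and the module-level reuse flag are assumptions chosen for an AWS-Lambda-style Python runtime.

```python
import time

# Module-level flag: it persists across invocations only when the platform
# reuses the same container, so a False value signals a cold start.
_warm = False

def handler(event, context):
    """CPU-bound probe: times a fixed arithmetic loop and reports container reuse."""
    global _warm
    cold_start = not _warm
    _warm = True

    # Workload size supplied by the benchmarking tool (hypothetical field name).
    n = int(event.get("iterations", 1_000_000))

    start = time.perf_counter()
    total = 0
    for i in range(n):                      # fixed CPU-bound workload
        total += i * i
    elapsed_ms = (time.perf_counter() - start) * 1000.0

    return {
        "cold_start": cold_start,           # resource-management / container-reuse signal
        "cpu_time_ms": round(elapsed_ms, 3),
        "iterations": n,
        "checksum": total % 97,             # keeps the loop from being trivially removed
    }
```

A driver tool would invoke this function at varying concurrency levels and memory allocations, collecting the returned timings and cold-start flags to compare platforms on a common footing.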
