Abstract

Serverless computing has sparked massive interest among both cloud service providers and their clientele in recent years. This model shifts the entire responsibility for resource management of user applications to the service provider. In serverless systems, the provider is highly motivated to attain cost-efficient usage of their infrastructure, given the granular billing models involved. However, due to the dynamic and multi-tenant nature of serverless workloads and systems, achieving efficient resource management while maintaining function performance is a challenging task. Rapid changes in application demand cause variations in the actual resource usage patterns of function instances. This leads to performance variations in co-located functions that compete for similar resources, due to resource contention. Most existing serverless scheduling works offer heuristic techniques for function scheduling, which are unable to capture the true dynamism in these systems caused by multi-tenancy and varying user request patterns. Further, they rarely consider the often conflicting dual objectives of achieving provider resource efficiency and application performance. In this article, we propose a novel technique incorporating Deep Reinforcement Learning (DRL) to overcome the aforementioned challenges for function scheduling in a highly dynamic serverless computing environment with heterogeneous computing resources. We train and evaluate our model in a practical setting incorporating Kubeless, an open-source serverless framework, deployed on a 23-node Kubernetes cluster. Extensive experiments on this testbed environment show promising results, with improvements of up to 24% and 34% in terms of application response time and resource usage cost respectively, compared to baseline techniques.
