Abstract

Serverless computing is an emerging cloud technology preferred for its quick response time and scale-to-zero features: code can run closer to users to minimize latency, and users are charged only for the resources they actually consume. The unit of execution in serverless computing is the function, which runs independently in a separate container. A container must be configured and set up each time a function invocation request is issued. The time this setup takes before actual execution begins is known as a cold start. Cold-start delay and the frequency of cold starts strongly affect the performance of serverless execution and need to be addressed. Most researchers attempt to address the problem by keeping the container warm for a fixed period of time, known as the idle container window, which may not be the right solution for mitigating cold-start delay. If the idle container window is too long, it leads to higher resource consumption, contradicting the scale-to-zero feature of serverless computing. On the other hand, if the window is too short, it may not be able to handle a large number of invocation requests. In this paper, an adaptive model is proposed to predict the length of the idle container window and the number of pre-warmed containers required in advance. We use a deep neural network and an LSTM model to capture previous function invocation patterns, which can then be used to pre-determine the length of the idle container window. The effectiveness of the proposed model is extensively compared with the cold-start mitigation strategies followed in AWS Lambda, Microsoft Azure, OpenFaaS, and OpenWhisk, using four real-world serverless applications.
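To make the adaptive idea concrete, the sketch below shows one plausible shape of such a policy: recent per-minute invocation counts are used to estimate demand, and the estimate drives both the idle container window length and the number of pre-warmed containers. This is only an illustrative sketch, not the authors' method: a simple moving average stands in for the trained DNN/LSTM predictor, and the function name, thresholds, and scaling constants are all assumptions introduced here.

```python
def predict_provisioning(history, lookback=5, base_window_min=1.0):
    """Hypothetical adaptive policy (illustrative stand-in for the paper's
    DNN/LSTM predictor): map recent invocation counts to an idle-container
    window length (minutes) and a pre-warmed container count."""
    if not history:
        # No history: scale to zero, keep nothing warm.
        return 0.0, 0
    recent = history[-lookback:]
    # Moving average as a placeholder for the learned invocation forecast.
    predicted_rate = sum(recent) / len(recent)  # invocations per minute
    if predicted_rate == 0:
        # Idle function: release containers (scale-to-zero behaviour).
        return 0.0, 0
    # Busier functions get a longer idle window, capped to bound waste;
    # the cap of 10x is an arbitrary illustrative choice.
    idle_window = base_window_min * min(predicted_rate, 10)
    # Roughly one pre-warmed container per expected concurrent request,
    # capped at an arbitrary pool limit of 20.
    prewarmed = min(round(predicted_rate), 20)
    return idle_window, prewarmed
```

For example, a function with no recent traffic yields `(0.0, 0)` (pure scale-to-zero), while a steady stream of about four invocations per minute yields a longer window and a small warm pool, trading a little idle resource use for fewer cold starts.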
