Abstract

Serverless computing has become increasingly popular due to its cost efficiency and flexibility. However, running serverless functions in the cloud can incur high end-to-end service latency and operational costs, while running them on edge servers may significantly reduce service latency but is constrained by limited computing power and memory. Given these limitations of the cloud and edge environments, this paper proposes a joint function warm-up and request routing scheme that executes serverless functions on the edge and in the cloud collaboratively. The key idea of the scheme is to maximize the hit ratio of serverless requests on warmed-up functions, thereby reducing the cold-start latency that dominates overall serving latency. The scheme explicitly accounts for server memory allocation and the operation budget for executing concurrent requests during scheduling. The proposed scheme has been evaluated through extensive simulations, and its effectiveness is demonstrated by comparison with upper-bound results.
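The abstract does not specify the warm-up algorithm itself, but the central claim (warm-function hit ratio drives serving latency) can be illustrated with a minimal sketch. The following toy simulation assumes an edge server that keeps a bounded number of functions warm with LRU eviction; all names, latencies, and the eviction policy are illustrative assumptions, not the paper's scheme.

```python
from collections import OrderedDict

def simulate_warm_pool(requests, memory_slots, cold_penalty_ms=500, warm_latency_ms=20):
    """Toy model: the server keeps at most `memory_slots` functions warm
    (LRU eviction). A request to a warm function is a hit and pays only the
    warm latency; a miss pays the cold-start penalty on top, after which the
    function is kept warm. Returns (hit ratio, mean per-request latency)."""
    warm = OrderedDict()  # function id -> True, ordered by recency of use
    hits = 0
    total_latency = 0.0
    for fn in requests:
        if fn in warm:
            hits += 1
            total_latency += warm_latency_ms
            warm.move_to_end(fn)  # mark as most recently used
        else:
            total_latency += cold_penalty_ms + warm_latency_ms
            if len(warm) >= memory_slots:
                warm.popitem(last=False)  # evict least recently used
            warm[fn] = True
    return hits / len(requests), total_latency / len(requests)
```

For example, a skewed request trace such as `['a', 'b', 'a', 'a', 'b', 'c', 'a']` with two warm slots yields a hit ratio of 3/7; raising the hit ratio directly lowers the mean latency because every avoided miss saves the full cold-start penalty.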

