Abstract
Serverless computing promises to make cloud computing cheaper and easier to use. However, serverless platforms use coarse-grained scheduling, which decreases efficiency and application performance. We propose a fine-grained application model for serverless applications, and use it to design a scheduler that improves application performance and efficiency. We model serverless applications as being composed of microtasks, each with its own unique resource requirements. Microtasks are easily identified via distinct application phases such as initialize, read, and process. We provide evidence for the existence of microtasks by experimentally evaluating a serverless online game. We design a scheduler that separates microtasks with different CPU requirements into different queues, so that the appropriate number of CPU cores can be allocated to each queue based on the CPU requirements of the microtasks in that queue. We implement and evaluate the design in an application-level proof-of-concept microtask-based scheduler and compare it to the task-based scheduling commonly used by serverless platforms. For a distributed sort application, the microtask-based scheduler decreases application makespan by 37% and the duration of I/O-based application stages by 81%, compared to task-based scheduling. Our work suggests that there is potential in extracting and using microtask information from serverless applications.
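The queue-separation idea described above can be sketched as follows. This is a minimal illustrative model, not the paper's implementation: the `Microtask` fields, the single CPU-demand threshold, and the proportional core-allocation rule are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Microtask:
    name: str          # application phase, e.g. "initialize", "read", "process"
    cpu_demand: float  # estimated CPU demand in fractional cores (assumed metric)

def partition_by_cpu(microtasks, threshold=0.5):
    """Separate microtasks into a CPU-bound and an I/O-bound queue,
    based on a simple per-microtask CPU-demand threshold (an assumption)."""
    queues = {"cpu": [], "io": []}
    for t in microtasks:
        queues["cpu" if t.cpu_demand >= threshold else "io"].append(t)
    return queues

def allocate_cores(queues, total_cores):
    """Give each non-empty queue cores in proportion to its aggregate
    CPU demand, reserving at least one core per non-empty queue."""
    demand = {q: sum(t.cpu_demand for t in ts) for q, ts in queues.items()}
    total = sum(demand.values()) or 1.0
    return {q: (max(1, round(total_cores * d / total)) if queues[q] else 0)
            for q, d in demand.items()}
```

For example, two compute-heavy and two I/O-heavy microtasks on an 8-core worker would yield most cores for the CPU queue and a small reservation for the I/O queue; a real scheduler would refine this with measured per-phase utilization.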