Abstract
Containers provide a lightweight runtime environment for microservices applications while enabling better server utilization. Automatically allocating an optimal number of CPU pins to the containers serving specific workloads can help minimize job completion times. Most existing state-of-the-art work focuses on building new, efficient scheduling algorithms for placing containers on the infrastructure, while resources are still allocated to the containers manually and statically. An automatic method for identifying and allocating optimal CPU resources to containers can therefore improve the efficiency of these scheduling algorithms. In this article, we introduce a new deep-learning-based approach that allocates optimal CPU resources to containers automatically. Our approach uses the law of diminishing marginal returns to determine the number of CPU pins at which a container gains maximum performance while maximizing the number of concurrent jobs. The proposed method is evaluated using real workloads on a Docker-based containerized infrastructure. The results demonstrate the effectiveness of the proposed solution, reducing job completion times by 23% to 74% compared to commonly used static CPU allocation methods.
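To make the diminishing-returns criterion concrete, the sketch below shows one simple way to locate the knee of a job's speedup curve on a Docker host. This is a brute-force profiling loop written for illustration only, not the deep-learning model the article proposes; the image name my-workload:latest, the helpers run_job and optimal_pins, and the min_gain threshold are all hypothetical, while --cpuset-cpus is Docker's standard flag for pinning a container to specific CPUs.

```python
# Illustrative sketch (not the authors' method): pick a CPU-pin count by
# increasing the pin set until the marginal speedup from one more CPU
# falls below a threshold, i.e. the knee of the diminishing-returns curve.
import subprocess
import time

def run_job(image: str, num_cpus: int) -> float:
    """Run the workload pinned to CPUs 0..num_cpus-1; return wall time in seconds."""
    cpuset = ",".join(str(c) for c in range(num_cpus))
    start = time.monotonic()
    subprocess.run(
        ["docker", "run", "--rm", f"--cpuset-cpus={cpuset}", image],
        check=True,
    )
    return time.monotonic() - start

def optimal_pins(image: str, max_cpus: int, min_gain: float = 0.05) -> int:
    """Grow the pin count while each added CPU still yields at least
    min_gain relative speedup; stop at the knee and return that count."""
    prev_time = run_job(image, 1)
    best = 1
    for n in range(2, max_cpus + 1):
        t = run_job(image, n)
        marginal_gain = (prev_time - t) / prev_time  # speedup from the extra CPU
        if marginal_gain < min_gain:
            break  # extra CPUs no longer pay off; leave them for concurrent jobs
        best, prev_time = n, t
    return best

if __name__ == "__main__":
    print("optimal pin count:", optimal_pins("my-workload:latest", max_cpus=8))
```

Stopping at the knee rather than at the fastest measured configuration is what lets an allocator both keep each job near its maximum performance and free the remaining CPUs for concurrent containers; the article's contribution is learning this point with a deep model instead of profiling every job exhaustively as the loop above does.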