Abstract

This paper proposes an architectural framework for the efficient orchestration of containers in cloud environments. It centres on resource scheduling and rescheduling policies, as well as autoscaling algorithms, that enable the creation of elastic virtual clusters. In this way, the proposed framework enables a computing environment to be shared among differing client applications packaged in containers, including web services, offline analytics jobs, and backend pre-processing tasks. The devised resource management algorithms and policies will improve utilization of the available virtual resources to reduce operational cost for the provider while satisfying the resource needs of various types of applications. The proposed algorithms will take into consideration factors previously omitted by other solutions, including 1) the pricing models of the acquired resources, 2) the fault tolerance of the applications, and 3) the QoS requirements of the running applications, such as the latencies and throughputs of the web services and the deadlines of the analytics and pre-processing jobs. The proposed solutions will be evaluated by developing a prototype platform based on one of the existing container orchestration platforms.

Highlights

  • Containers, which enable lightweight environment and performance isolation, fast and flexible deployment, and fine-grained resource sharing, have gained popularity as a complement to hardware virtualization for resource management and application deployment [19][20]

  • Instead of running separate clusters of homogeneous containers, organizations prefer to run containers of different applications on a shared cluster, which has driven the creation of modern container orchestration platforms such as Kubernetes [3], Docker Swarm [4], and Apache Mesos [5]

  • In the context of the Cloud, container orchestration platforms are usually deployed on virtual clusters acquired from one or more Cloud data centers
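Placing containers of different applications on a shared virtual cluster is, at its core, a bin-packing problem. As a minimal sketch (not the paper's algorithm, and not the scheduler of any named platform), a first-fit-decreasing heuristic can pack container resource demands onto the fewest VMs that fit; the single-dimension millicore capacities below are illustrative assumptions, whereas real orchestrators score nodes on several resources at once:

```python
def first_fit_decreasing(demands: list[int], capacity: int) -> list[list[int]]:
    """Pack container CPU demands (millicores) onto as few VMs as possible.

    Each VM is modelled simply as the list of demands placed on it.
    """
    vms: list[list[int]] = []
    for demand in sorted(demands, reverse=True):  # place the largest containers first
        for vm in vms:
            if sum(vm) + demand <= capacity:      # fits on an already-provisioned VM
                vm.append(demand)
                break
        else:                                     # no VM has room: provision a new one
            vms.append([demand])
    return vms
```

For example, `first_fit_decreasing([500, 700, 300, 400, 100], 1000)` packs five containers onto two fully utilized VMs instead of the five a naive one-container-per-VM placement would use.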


Summary

Introduction

Containers, which enable lightweight environment and performance isolation, fast and flexible deployment, and fine-grained resource sharing, have gained popularity as a complement to hardware virtualization for resource management and application deployment [19][20]. Instead of running separate clusters of homogeneous containers, organizations prefer to run containers of different applications on a shared cluster, which has driven the creation of modern container orchestration platforms such as Kubernetes [3], Docker Swarm [4], and Apache Mesos [5]. These platforms provide efficient bin-packing algorithms to schedule containers on the shared clusters. To save operational cost in the Cloud, it is essential to consolidate containers onto as few virtual machines as possible, because the initial placement may deteriorate under the dynamics caused by workload fluctuation, application launches and terminations, and pricing variations. To realize this goal, the platform should be able to migrate containers running on underutilized VMs to other VMs. Using state-of-the-art techniques, container migration can be implemented in three steps: 1) checkpointing the target container (if necessary) [6], 2) killing it on the original host, and 3) resuming it on the target host; this approach is only applicable to stateless and fault-tolerant applications.
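The three migration steps above can be sketched with a toy in-memory model. The `Host` and `Container` classes and the `migrate` helper below are illustrative assumptions, not an API of any orchestration platform; a real implementation would checkpoint process state with a tool such as CRIU [6] rather than copying a Python dict:

```python
from dataclasses import dataclass, field


@dataclass
class Container:
    name: str
    state: dict = field(default_factory=dict)        # in-memory application state


@dataclass
class Host:
    name: str
    containers: dict = field(default_factory=dict)   # containers keyed by name


def migrate(name: str, src: Host, dst: Host, stateful: bool = True) -> None:
    """Move one container from src to dst in the three steps described above."""
    container = src.containers[name]
    checkpoint = dict(container.state) if stateful else {}  # 1) checkpoint (if necessary)
    del src.containers[name]                                # 2) kill on the original host
    dst.containers[name] = Container(name, checkpoint)      # 3) resume on the target host
```

The sketch makes the constraint in the text concrete: if `stateful=False` (no checkpoint), the resumed container starts with empty state, which is acceptable only for stateless or fault-tolerant applications.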


