Abstract
The wide adoption of microservices architectures has introduced an unprecedented granularisation of computing that requires the coordinated execution of multiple containers with diverse lifetimes and potentially different auto-scaling requirements. These applications are managed by means of container orchestration platforms, and existing centralised approaches to auto-scaling face challenges when used for the timely adaptation of the elasticity required by the different application components. This paper studies the impact of integrating bio-inspired approaches for dynamic distributed auto-scaling into container orchestration platforms. With a focus on running self-managed containers, we compare alternative configuration options for the container life cycle. The performance of the proposed models is validated through simulations subjected to both synthetic and real-world workloads. In addition, multiple scaling options are assessed with the purpose of identifying exceptional cases and areas for improvement. Furthermore, a non-traditional metric for scaling measurement is introduced to substitute classic analytical approaches. We identified connections between two related worlds (biological systems and software container elasticity procedures), and we open a new research area in software containers that features potential self-guided container elasticity activities.
Highlights
The widespread adoption of Linux containers, and in particular Docker [1], as a mechanism for convenient application delivery has paved the way in recent years for the surge of the microservices architectural pattern [2], in which monolithic applications coded in a single programming language can be broken down into multiple polyglot services exposing interfaces.
Considering the queue size (Qsize) column, the lowest size is achieved in experiment SCM2.
The ratio Qsize/#containers highlights a similar situation: the best ratio is obtained for SCM2, whereas SCM3 and ICM1 show similar response times, and the worst is SCM1.
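The comparison above can be sketched as a small script that ranks experiments by the Qsize/#containers ratio. The numbers below are illustrative placeholders chosen only to reproduce the ordering described in the highlights (they are not the paper's measured values):

```python
# Hypothetical experiment results: label -> (queue size, number of containers).
# Values are assumptions for illustration, not taken from the paper.
results = {
    "SCM1": (120, 10),
    "SCM2": (35, 14),
    "SCM3": (60, 12),
    "ICM1": (62, 12),
}

# Rank experiments by the Qsize/#containers ratio (lower is better).
ranked = sorted(results.items(), key=lambda kv: kv[1][0] / kv[1][1])
for label, (qsize, containers) in ranked:
    print(f"{label}: Qsize={qsize}, #containers={containers}, "
          f"ratio={qsize / containers:.2f}")
```

With these placeholder values the ranking matches the text: SCM2 is best, SCM3 and ICM1 are close, and SCM1 is worst.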
Summary
The widespread adoption of Linux containers, and in particular Docker [1], as a mechanism for convenient application delivery has paved the way in recent years for the surge of the microservices architectural pattern [2], in which monolithic applications coded in a single programming language can be broken down into multiple polyglot services exposing interfaces. These are typically delivered and executed as containers managed by a Container Orchestration Platform (COP) such as Kubernetes [3] or Apache Mesos [4]. Microservices applications lead to faster creation, operation and removal of computing entities (containers) when compared to using Virtual Machines. This imposes a serious challenge for auto-scaling, where more adaptable, precise and capable systems are required to manage the elasticity of large-scale fleets of containers belonging to multiple application architectures with dynamic elasticity requirements.
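To make the idea of self-managed containers concrete, the following is a minimal sketch of a decentralised, per-container scaling rule in the bio-inspired spirit described above: each container decides locally whether to replicate or terminate, much like a cell dividing or dying. The function name, thresholds, and rule are assumptions for illustration, not the paper's actual model:

```python
# Illustrative local scaling rule for a self-managed container.
# HIGH_LOAD/LOW_LOAD thresholds are assumed values, not from the paper.
HIGH_LOAD = 0.8   # above this utilisation a container asks to replicate
LOW_LOAD = 0.2    # below this utilisation a container may terminate itself

def local_decision(utilisation: float, peers_alive: int) -> str:
    """Each container decides locally, without a central auto-scaler."""
    if utilisation > HIGH_LOAD:
        return "replicate"
    if utilisation < LOW_LOAD and peers_alive > 1:
        return "terminate"   # never let the whole service die out
    return "keep"
```

A COP integration would invoke such a rule inside each container's control loop instead of relying on a single centralised auto-scaling controller.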