Abstract

Kubernetes, an open-source container orchestration platform, has been widely adopted by cloud service providers (CSPs) for its advantages in simplifying container deployment, scalability, and scheduling. Networking is one of the central components of Kubernetes, providing connectivity between different Pods (groups of containers) both within the same host and across hosts. To bootstrap Kubernetes networking, the Container Network Interface (CNI) provides a unified interface for the interaction between container runtimes and network plugins. There are several CNI implementations, available as open-source ‘CNI plugins’. Because they differ in functionality and performance, it is a challenge for a cloud provider to differentiate among them and choose the appropriate plugin for its environment. In this article, we compare the various open-source CNI plugins available from the community, both qualitatively and through detailed quantitative measurements. With our experimental evaluation, we analyze the overheads and bottlenecks of each CNI plugin, especially those arising from the interaction with the datapath/iptables and the host network stack. Overlay tunnel offload support in the network interface card plays a significant role in achieving good performance for CNIs that use overlay tunnels for inter-host Pod-to-Pod communication. We also study scalability with an increasing number of Pods as well as with HTTP workloads, and briefly evaluate Pod startup latency. Our measurement results inform the outline of an ideal CNI environment for Kubernetes.
