ABSTRACT
Kubernetes, an open-source container orchestration platform, offers a comprehensive suite of features for managing containerized applications effectively, including horizontal pod scaling, per-node-pool cluster scaling, and automated adjustment of resource requests. This research harnesses these capabilities to address the limitations experienced by fog servers in edge environments, particularly those arising from restricted network connectivity and scalability challenges. The primary focus of this paper is Kubernetes' role in enhancing scalability by providing a robust framework for managing containerized applications. The proposed approach creates a predefined number of pods and containers within a Kubernetes cluster, designed to handle incoming requests efficiently while optimizing CPU and memory usage. The web tier is implemented as a microservice architecture, with separate pods for the front end, back end, and database, ensuring a modular and scalable design. All pods communicate and integrate through REST APIs, enabling seamless interaction and data exchange between the services. When handling web requests, the approach enables and controls both internal and external network traffic, ensuring secure and efficient communication. The analysis then examines the CPU and memory utilization of the pods, as well as node bandwidth, to provide a comprehensive evaluation of container scalability and performance within the Kubernetes cluster. The findings demonstrate Kubernetes' capability to manage container scalability and optimize resource utilization, highlighting its efficiency and robustness in a microservice environment.
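As a rough illustration of the deployment pattern described above, the sketch below uses the official Kubernetes Python client to create a front-end Deployment with a fixed replica count and explicit CPU/memory requests, together with a CPU-based HorizontalPodAutoscaler. The names ("frontend", the "web-tier" namespace), the container image, and the replica and utilization thresholds are illustrative assumptions, not values taken from the paper.

```python
# Illustrative sketch only: a hypothetical "frontend" microservice pod set with
# explicit CPU/memory requests/limits and a CPU-based HorizontalPodAutoscaler.
# Names, image, namespace, and thresholds are assumptions, not values from the paper.
from kubernetes import client, config


def main():
    config.load_kube_config()  # or config.load_incluster_config() when run inside a pod

    namespace = "web-tier"  # assumed pre-existing namespace for the web-tier services

    # Deployment: a predefined number of front-end pods with resource requests/limits.
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="frontend", labels={"app": "frontend"}),
        spec=client.V1DeploymentSpec(
            replicas=3,  # predefined pod count for the front-end tier
            selector=client.V1LabelSelector(match_labels={"app": "frontend"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "frontend"}),
                spec=client.V1PodSpec(
                    containers=[
                        client.V1Container(
                            name="frontend",
                            image="nginx:1.25",  # placeholder image
                            ports=[client.V1ContainerPort(container_port=80)],
                            resources=client.V1ResourceRequirements(
                                requests={"cpu": "250m", "memory": "256Mi"},
                                limits={"cpu": "500m", "memory": "512Mi"},
                            ),
                        )
                    ]
                ),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(namespace, deployment)

    # HorizontalPodAutoscaler: scales the front-end pods on average CPU utilization.
    hpa = client.V1HorizontalPodAutoscaler(
        api_version="autoscaling/v1",
        kind="HorizontalPodAutoscaler",
        metadata=client.V1ObjectMeta(name="frontend-hpa"),
        spec=client.V1HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V1CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name="frontend"
            ),
            min_replicas=3,
            max_replicas=10,
            target_cpu_utilization_percentage=70,
        ),
    )
    client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(namespace, hpa)


if __name__ == "__main__":
    main()
```

The back-end and database tiers could be defined analogously, and ingress from the external network could be restricted with NetworkPolicy objects via client.NetworkingV1Api(); the abstract does not specify the exact network controls used in the study.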