Abstract
Fog computing is a new computing paradigm that employs computation and network resources at the edge of a network to build small clouds, which function as small data centers. In fog computing, lightweight virtualization (e.g., containers) has been widely adopted to achieve low overhead on performance-limited fog devices such as WiFi access points (APs) and set-top boxes. Unfortunately, containers offer weak control over network bandwidth for outbound traffic, which poses a challenge to fog computing. Existing solutions for containers fail to achieve desirable network bandwidth control, so bandwidth-sensitive applications suffer unacceptable network performance. In this paper, we propose qCon, a QoS-aware network resource management framework for containers that limits the rate of outbound traffic in fog computing. qCon aims to provide both proportional share scheduling and bandwidth shaping to satisfy the varying performance demands of containers while remaining lightweight. For this purpose, qCon supports three scheduling policies that can be applied to containers simultaneously: proportional share scheduling, minimum bandwidth reservation, and maximum bandwidth limitation. For a lightweight implementation, qCon builds its own scheduling framework on the Linux bridge by interposing qCon’s scheduling interface on the frame processing function of the bridge. To show qCon’s effectiveness in a real fog computing environment, we implement qCon in a Docker container infrastructure on a performance-limited fog device—a Raspberry Pi 3 Model B board.
Highlights
Centralized cloud computing platforms such as Amazon EC2 and the Google Cloud Platform have become a prevalent approach to collect and process the massive Internet of Things (IoT) data generated by countless sensors, micro-cameras, and smart objects; in the literature, when a cloud infrastructure is constructed at the core of the network, the cloud is regarded as centralized [1,2]
qCon adjusts the performance of containers when the bandwidth computed by proportional share scheduling does not meet a container’s minimum reservation or exceeds its maximum limitation
This is because minimum bandwidth reservation and maximum bandwidth limitation take priority over proportional share scheduling
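The interplay described in these highlights — proportional shares that are overridden whenever a reservation or limitation would be violated — can be sketched roughly as follows. The function name, data layout, and simple clamping strategy are illustrative assumptions for exposition, not qCon’s actual algorithm:

```python
def allocate_bandwidth(total_mbps, containers):
    """Hypothetical sketch: split bandwidth by weight, then let each
    container's minimum reservation / maximum limitation override the
    proportional share, as the highlights describe."""
    total_weight = sum(c["weight"] for c in containers)
    alloc = {}
    for c in containers:
        share = total_mbps * c["weight"] / total_weight
        # Reservation and limitation have higher priority than the share.
        share = max(share, c.get("min", 0.0))
        share = min(share, c.get("max", float("inf")))
        alloc[c["name"]] = share
    return alloc
```

Note that naive clamping like this can leave the link over- or under-committed; the point of a framework such as qCon is to re-adjust the remaining containers’ shares whenever a reservation or limitation fires.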
Summary
Centralized cloud computing platforms such as Amazon EC2 and the Google Cloud Platform have become a prevalent approach to collect and process the massive Internet of Things (IoT) data generated by countless sensors, micro-cameras, and smart objects; in the literature, when a cloud infrastructure is constructed at the core of the network, the cloud is regarded as centralized [1,2]. Because the computation and storage resources are remote from IoT devices, end users suffer low bandwidth, high network latency, and poor responsiveness. This limitation ultimately leads to a poor experience for end users of traditional cloud services. Recent fog computing architectures overcome this limitation by placing computation resources near IoT devices [4,5]. Fog computing exploits computation and network resources at the edge of a network to build small clouds, which function as small data centers. Containers have been used as a fog platform instead of virtual machines (VMs) because containers impose low overhead on performance-limited fog devices such as network gateways, routers, and WiFi access points (APs). Separate namespaces isolate container IDs, network interfaces, interprocess communication, and mount points, while cgroups limit and account for each container’s resource usage, including the CPU, memory, and I/O devices
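While cgroups meter and cap CPU, memory, and I/O, outbound bandwidth shaping is the gap qCon fills at the bridge’s frame-processing hook. A common building block for such per-container shaping is a token bucket; the sketch below is an illustrative stand-in for that idea, not qCon’s implementation, and its class and parameter names are assumptions:

```python
class TokenBucket:
    """Illustrative token-bucket shaper: a frame may be forwarded only
    if enough tokens (bytes) have accumulated at the configured rate."""

    def __init__(self, rate_bytes_per_s, burst_bytes, now=0.0):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes   # start with a full burst allowance
        self.last = now

    def try_send(self, frame_len, now):
        # Refill tokens for the time elapsed since the last decision.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if frame_len <= self.tokens:
            self.tokens -= frame_len
            return True    # forward the frame
        return False       # hold or drop the frame: limit exceeded
```

A shaper like this, attached per container at the point where the bridge processes outgoing frames, enforces a maximum bandwidth limitation without requiring a full queueing-discipline stack on a constrained device.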