Abstract

In recent years, computing workloads have shifted from the cloud to the fog, and IoT devices are becoming powerful enough to run containerized services. While the combination of IoT devices and fog computing has many advantages, such as increased efficiency, reduced network traffic and a better end-user experience, the scale and volatility of the fog and edge also present new problems for service deployment scheduling. Fog and edge networks contain orders of magnitude more devices than cloud data centers, and they are often less stable and slower. Additionally, frequent changes in network topology and in the number of connected devices are the norm in edge networks, rather than the exception as in cloud data centers. This article presents a service scheduling algorithm, labeled “Swirly”, for fog and edge networks containing hundreds of thousands of devices, which is capable of incorporating changes in network conditions and connected devices. The theoretical performance is explored, and a model of the behaviour and limits of fog nodes is constructed. An evaluation of Swirly shows that it is capable of managing service meshes for at least 300,000 devices in near real-time.

Highlights

  • In recent years, the rise of technologies such as containers [1] and more recently unikernels [2] has triggered a wave of research into edge and cloud offloading

  • While the combination of IoT and fog computing offers a wide array of advantages, such as improvements in efficiency and user experience, it exacerbates some of the service deployment scheduling challenges already present in the cloud, such as taking network bandwidth, network reliability and distances between nodes into account

  • In the introduction, a number of requirements are presented for a useful large-scale fog service scheduler

Introduction

The rise of technologies such as containers [1] and, more recently, unikernels [2] has triggered a wave of research into edge and cloud offloading. This has resulted in a move from purely cloud-centered service deployments to fog computing and edge computing [3, 4], in which services are deployed close to their consumers instead of in monolithic data centers. Instead of being located in centralized data centers, the fog and edge are spread over a large physical area containing hundreds of thousands of devices. This means that network grade and quality can vary by orders of magnitude, from DSL lines to fiber optics, while the distances involved result in much higher latencies between nodes than in cloud data centers. Any change in the fog topology can trigger migrations of service instances or the deployment of extra instances, as can edge nodes coming online, going offline, or moving to a different location.
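The event-driven reaction to topology changes described above can be illustrated with a minimal sketch. Note that this is a hypothetical illustration, not Swirly's actual algorithm: the class and method names (`Scheduler`, `on_node_online`, `on_node_offline`) and the naive node-selection policy are assumptions made for the example.

```python
# Hypothetical sketch of an event-driven fog scheduler: nodes joining or
# leaving the network trigger rescheduling of affected service instances.
# This is NOT Swirly's published algorithm, only an illustration of the idea.
from dataclasses import dataclass, field

@dataclass
class Scheduler:
    nodes: dict = field(default_factory=dict)        # node_id -> location
    assignments: dict = field(default_factory=dict)  # service -> set of node_ids

    def on_node_online(self, node_id: str, location: str) -> None:
        self.nodes[node_id] = location

    def on_node_offline(self, node_id: str) -> None:
        self.nodes.pop(node_id, None)
        # Any service instance hosted on the lost node must be migrated.
        for service, hosts in self.assignments.items():
            if node_id in hosts:
                hosts.discard(node_id)
                replacement = self._pick_node()
                if replacement is not None:
                    hosts.add(replacement)

    def _pick_node(self):
        # Placeholder policy: any available node. A real fog scheduler would
        # weigh bandwidth, reliability, and distance between nodes.
        return next(iter(self.nodes), None)

sched = Scheduler()
sched.on_node_online("edge-1", "antwerp")
sched.on_node_online("edge-2", "ghent")
sched.assignments["cam-feed"] = {"edge-1"}
sched.on_node_offline("edge-1")  # node loss triggers a migration to edge-2
print(sched.assignments["cam-feed"])  # {'edge-2'}
```

In a real deployment the same handlers would also fire for nodes moving between locations, which the introduction notes is routine in edge networks rather than exceptional.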


