Abstract

Containers are a form of software virtualization, rapidly becoming the de facto way of providing edge computing services. Research on container-based edge computing is plentiful, and this has been buoyed by the increasing demand for single-digit millisecond latency computations. A container scheduler is the part of the architecture that manages and orchestrates multiple container-based applications on heterogeneous computing nodes. The scheduler decides how incoming computing requests are allocated to containers, which edge nodes the containers are placed on, and where already deployed containers are migrated to. This paper aims to clarify the concept of container placement and migration in edge servers and the scheduling models that have been developed for this purpose. The study illuminates the frameworks and algorithms upon which the scheduling models are built. To convert the problem to one that can be solved using an algorithm, the container placement problem is mostly abstracted using multi-objective optimization models or graph network models. The scheduling algorithms are predominantly heuristic-based algorithms, which are able to arrive at sub-optimal solutions very quickly. There is a paucity of container scheduling models that consider distributed edge computing tasks. Research in decentralized scheduling systems is gaining momentum, and the future outlook is in scheduling containers for mobile edge nodes.
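The heuristic, multi-objective flavor of placement described above can be illustrated with a minimal sketch. The `Node` and `Container` fields, the weights, and the scoring function below are illustrative assumptions, not a model from any surveyed paper: each container is greedily assigned to the feasible node that minimizes a weighted combination of network latency and residual CPU load.

```python
# Hypothetical greedy heuristic for container placement (a sketch, not a
# definitive implementation). Node/Container fields are assumed for illustration.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu: float          # remaining CPU capacity
    mem: float          # remaining memory capacity
    latency_ms: float   # network latency from the request source

@dataclass
class Container:
    name: str
    cpu: float          # CPU demand
    mem: float          # memory demand

def place(containers, nodes, w_latency=0.7, w_load=0.3):
    """Greedily assign each container (largest demand first) to the
    feasible node with the lowest weighted latency/load score."""
    placement = {}
    for c in sorted(containers, key=lambda c: c.cpu + c.mem, reverse=True):
        feasible = [n for n in nodes if n.cpu >= c.cpu and n.mem >= c.mem]
        if not feasible:
            placement[c.name] = None  # no node can host it: reject or queue
            continue
        best = min(feasible, key=lambda n: w_latency * n.latency_ms
                                           + w_load * (c.cpu / n.cpu))
        best.cpu -= c.cpu             # reserve capacity on the chosen node
        best.mem -= c.mem
        placement[c.name] = best.name
    return placement
```

A greedy pass like this is why heuristic schedulers reach sub-optimal answers quickly: each container is placed once, in a single scan over the nodes, instead of searching the exponential space of joint placements.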

Highlights

  • The number of things connected to the internet is in constant growth due to ever increasing demand for automation, artificial intelligence, augmented reality, smart homes and cities, real-time analytics, gaming and a variety of other industrial- and consumer-based applications

  • This study focuses on the frameworks and algorithms upon which the scheduling models are built

  • Much research effort has been expended on the development of scheduling models for container placement and migration in edge computing



Introduction

The number of things connected to the internet is in constant growth due to ever increasing demand for automation, artificial intelligence, augmented reality, smart homes and cities, real-time analytics, gaming and a variety of other industrial- and consumer-based applications. The volume of data being generated and the frequency and complexity of computation are increasing, exerting pressure on cloud servers. This pressure leads to high energy usage at datacenters [1] and contributes to a reduction in quality of service (QoS), such as dropped computations, high latency, high cost of bandwidth and overloaded cloud servers. Rather than send a computation from the end device to the cloud, or transmit a response from the cloud to the end device, the edge server performs the computation and transmission, or a part of it [2]. This service improves overall computation latency, minimizes the workload that is sent to the cloud, saves on bandwidth and enhances data privacy [3].
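The latency benefit of offloading to a nearby edge server can be made concrete with a back-of-the-envelope model. All numbers below are assumptions for illustration, not measurements: total latency is approximated as the network round-trip time plus the compute time at the target.

```python
# Illustrative latency model (assumed numbers, not measurements):
# total latency = network RTT + compute time at the target node.
def total_latency_ms(rtt_ms, work_mi, mips):
    """work_mi: task size in millions of instructions;
    mips: target's speed in millions of instructions per second."""
    return rtt_ms + (work_mi / mips) * 1000.0

# A distant datacenter is faster per instruction but far away...
cloud = total_latency_ms(rtt_ms=80.0, work_mi=500.0, mips=50_000)  # 90.0 ms
# ...while a slower edge node sits close to the device.
edge = total_latency_ms(rtt_ms=5.0, work_mi=500.0, mips=10_000)    # 55.0 ms
```

Even with a fifth of the compute power in this hypothetical, the edge node wins on proximity, which is the core argument for pushing computation toward the edge.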

