Abstract

The rapid growth in the number of connected devices with computing capabilities over the past years has enabled the emergence of a new computing layer at the Edge. Despite being resource-constrained compared with cloud servers, Edge devices offer lower latencies than those achievable by Cloud computing. The combination of the Cloud and Edge computing paradigms can provide a suitable infrastructure for complex applications’ quality-of-service requirements that cannot easily be met with either paradigm alone. These requirements can differ greatly between applications, from achieving time sensitivity or assuring data privacy to storing and processing large amounts of data. Orchestrating these applications across the Cloud–Edge continuum therefore raises new challenges that need to be solved in order to take full advantage of this layered infrastructure. This paper proposes an architecture that enables the dynamic orchestration of applications in the Cloud–Edge continuum. It focuses on the application’s quality of service by providing the scheduler with input that is commonly used by modern scheduling algorithms. The architecture uses a distributed scheduling approach that can be customized on a per-application basis, ensuring that it scales properly even in setups with a large number of nodes and complex scheduling algorithms. The architecture has been implemented on top of Kubernetes and evaluated in order to assess its viability to enable more complex scheduling algorithms that take the quality of service of applications into account.

Highlights

  • In the last decade, the Cloud computing paradigm has improved the computing capabilities of business applications in many domains while decreasing the total cost of ownership

  • Cloud applications consist of lightweight virtualization components that are deployed as a set of containers, improving space requirements and deployment times compared to virtual machines [2]. These containers are managed by Container Orchestration Engines (COEs), such as Docker Swarm [3] or Kubernetes [4]

  • In order to do so, COEs need to determine from the user input which containers need to be executed, decide the best target location for each of them, deploy each container to the selected node, execute it, and monitor its execution to re-deploy it in case of failure [5]
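The node-selection step of this workflow can be illustrated with a minimal sketch. This is not the scheduling algorithm proposed by the paper; it is a simplified filter-and-score heuristic (the general pattern COEs such as Kubernetes follow), with all names and resource figures invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu_free: float  # free CPU cores
    mem_free: float  # free memory, in GiB

def select_node(nodes, cpu_req, mem_req):
    """Filter out nodes that cannot fit the container, then pick
    the feasible node with the most free CPU (least-loaded heuristic)."""
    feasible = [n for n in nodes
                if n.cpu_free >= cpu_req and n.mem_free >= mem_req]
    if not feasible:
        return None  # container stays pending until resources free up
    return max(feasible, key=lambda n: n.cpu_free)

nodes = [Node("edge-1", 1.5, 2.0),
         Node("edge-2", 0.5, 4.0),
         Node("cloud-1", 8.0, 32.0)]
best = select_node(nodes, cpu_req=1.0, mem_req=1.0)
print(best.name)  # cloud-1: the feasible node with the most free CPU
```

Real COEs extend this pattern with many more predicates and priorities (affinity rules, taints, spread constraints), but the filter-then-score structure is the same.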


Summary

Introduction

The Cloud computing paradigm has improved the computing capabilities of business applications in many domains while decreasing the total cost of ownership. Typical Cloud computing clusters are made of powerful devices located in big datacenters, under proper temperature conditions, and with stable network and energy supplies. Virtualization technologies empowering these devices enable third-party companies to pay for computing resources as they need them, instead of hosting their own infrastructure [1]. In order to provide a better fit for these applications, the Edge and Fog computing paradigms have been developed. Although these are two different paradigms, in the context of this work the distinction is irrelevant, and the term Edge computing will be used (except for the related work section, where the term used by the article authors will be used). The main contributions of this work are:

  • Generic infrastructure and workload models to represent the Cloud–Edge continuum devices and Edge-oriented applications, respectively

  • A distributed multi-scheduler approach to handle scalability without partitioning the infrastructure

  • Per-application customization of scheduling algorithms

  • Inter-component constraint awareness at the scheduling decision making
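The per-application customization idea can be sketched as a registry that maps each application to its own scheduling policy, so different applications can be placed with different logic over the same unpartitioned pool of nodes. This is a hypothetical illustration, not the paper’s implementation; the policy functions, node attributes, and the application names (borrowed from the evaluation section) are assumptions.

```python
# Hypothetical per-application scheduler registry: each application
# brings its own placement policy over the same shared node pool.

def prefer_edge(nodes):
    # Latency-sensitive applications favour the lowest-latency node.
    return min(nodes, key=lambda n: n["latency_ms"])

def prefer_cloud(nodes):
    # Data-intensive applications favour the node with the most memory.
    return max(nodes, key=lambda n: n["mem_gib"])

POLICIES = {
    "smoke-monitoring": prefer_edge,   # time-sensitive workload
    "speed-profiling": prefer_cloud,   # data-heavy workload
}

def schedule(app, nodes):
    """Dispatch to the application's own policy; fall back to a default."""
    policy = POLICIES.get(app, prefer_cloud)
    return policy(nodes)

nodes = [
    {"name": "edge-1", "latency_ms": 5, "mem_gib": 4},
    {"name": "cloud-1", "latency_ms": 40, "mem_gib": 64},
]
print(schedule("smoke-monitoring", nodes)["name"])  # edge-1
print(schedule("speed-profiling", nodes)["name"])   # cloud-1
```

In Kubernetes terms, this corresponds to running several scheduler instances and letting each workload select one, rather than splitting the cluster into per-application partitions.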

Related Work
Distributed Scheduling Architecture for the Cloud–Edge Continuum
Architecture Components
System Modeling
Infrastructure Model
Workload Model
Distributed Scheduling
Application Scheduler Deployment Process
Application Component Deployment Process
Application Scheduler Algorithm
Implementation
Brief Overview of Kubernetes Architecture
Architecture Overhaul
Application Control Layer Nodes
Application and Component Controllers
Application Schedulers
SWIM-NSM Daemon
Evaluation
Infrastructure
Smoke Monitoring Application
Speed Profiling Application
Results
Conclusions