Abstract

This work establishes a set of methodologies for evaluating the performance of any task scheduling policy in heterogeneous computing contexts. We formally state a scheduling model for hybrid edge–cloud computing ecosystems and conduct simulation-based experiments on large workloads. In addition to conventional cloud datacenters, we consider edge datacenters comprising battery-powered smartphone and Raspberry Pi edge devices. We define realistic capacities for the computational resources; once a schedule is produced, each task's demands may or may not be satisfiable by those capacities. We build a scheduling and evaluation framework and measure typical scheduling metrics, namely mean waiting time, mean turnaround time, makespan, and throughput, for the Round-Robin, Shortest Job First, Min-Min, and Max-Min scheduling schemes. Our analysis and results show that state-of-the-art independent task scheduling algorithms suffer performance degradation in heterogeneous edge–cloud environments compared with cloud-only environments, exhibiting significant task failures and nonoptimal datacenter resource utilization. In particular, for large sets of tasks, more than 25% of tasks fail to execute under every scheduling scheme, owing to low battery or limited memory.
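
For concreteness, the sketch below shows one way the four reported metrics can be computed from a completed schedule. The Task fields and the metrics helper are illustrative assumptions, not the paper's actual framework.

```python
from dataclasses import dataclass

@dataclass
class Task:
    arrival: float   # time the task enters the queue
    start: float     # time execution begins
    finish: float    # time execution completes

def metrics(tasks):
    """Compute mean waiting time, mean turnaround time, makespan, and throughput."""
    n = len(tasks)
    mean_wait = sum(t.start - t.arrival for t in tasks) / n
    mean_turnaround = sum(t.finish - t.arrival for t in tasks) / n
    makespan = max(t.finish for t in tasks) - min(t.arrival for t in tasks)
    throughput = n / makespan  # completed tasks per unit time
    return mean_wait, mean_turnaround, makespan, throughput

# Example: three tasks executed back-to-back on a single resource
print(metrics([Task(0, 0, 4), Task(1, 4, 6), Task(2, 6, 11)]))
```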

Highlights

  • Scheduling of execution workloads is critical to the performance of any computing system

  • Large-scale processing of heterogeneous big data in heterogeneous computing environments is prone to high latency due to factors such as nonoptimal job scheduling and resource allocation, as well as the limited resources of Internet of Things (IoT) devices

  • The problem we address in this paper is the evaluation of the scheduling algorithms Round-Robin (RR), Shortest Job First (SJF), Min-Min, and Max-Min in heterogeneous environments (a sketch of the Min-Min and Max-Min heuristics follows this list)
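
As a concrete reference for the Min-Min and Max-Min heuristics named above, here is a minimal sketch assuming an expected-time-to-compute (ETC) matrix; the function name, matrix, and numbers are illustrative, not taken from the paper.

```python
def min_min(etc, num_machines, max_min=False):
    """Greedy Min-Min / Max-Min heuristic.
    etc[t][m] = expected execution time of task t on machine m."""
    ready = [0.0] * num_machines            # machine ready times
    unscheduled = set(range(len(etc)))
    assignment = {}
    while unscheduled:
        # For each task, find its best machine and earliest completion time.
        best = {t: min(range(num_machines),
                       key=lambda m: ready[m] + etc[t][m])
                for t in unscheduled}
        ect = {t: ready[best[t]] + etc[t][best[t]] for t in unscheduled}
        # Min-Min picks the task finishing soonest; Max-Min the latest.
        pick = (max if max_min else min)(unscheduled, key=ect.get)
        m = best[pick]
        ready[m] += etc[pick][m]
        assignment[pick] = m
        unscheduled.remove(pick)
    return assignment, max(ready)           # task-to-machine map and makespan

# 3 tasks x 2 machines ETC matrix (illustrative numbers)
etc = [[4, 6], [3, 5], [8, 2]]
print(min_min(etc, 2))                 # Min-Min
print(min_min(etc, 2, max_min=True))   # Max-Min
```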


Introduction

Scheduling of execution workloads is critical to the performance of any computing system. Large-scale processing of heterogeneous big data in heterogeneous computing environments is prone to high latency due to factors such as nonoptimal job scheduling and resource allocation, as well as the limited resources of Internet of Things (IoT) devices. Given the unprecedented number of smart edge devices and IoT sensors deployed today, massive volumes of heterogeneous data are generated continuously across a variety of use cases [1]. This directly translates into an emerging need for latency-aware, energy-efficient computing techniques [2]. To test the performance of current techniques, it is important to have a framework for their accurate and correct evaluation.
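
As one example of such an evaluation building block, the sketch below simulates non-preemptive Shortest Job First on a single resource; the job tuples and function are hypothetical, and the resulting start and finish times could feed a metrics routine like the one shown earlier.

```python
import heapq

def sjf_schedule(jobs):
    """Non-preemptive Shortest Job First on one resource.
    jobs: list of (arrival_time, burst_time). Returns (start, finish) per job."""
    order = sorted(range(len(jobs)), key=lambda i: jobs[i][0])
    ready, result = [], [None] * len(jobs)
    clock, i = 0.0, 0
    while i < len(order) or ready:
        # Admit every job that has arrived by the current time.
        while i < len(order) and jobs[order[i]][0] <= clock:
            arrival, burst = jobs[order[i]]
            heapq.heappush(ready, (burst, order[i]))
            i += 1
        if not ready:                       # idle until the next arrival
            clock = jobs[order[i]][0]
            continue
        burst, j = heapq.heappop(ready)     # shortest pending job runs next
        result[j] = (clock, clock + burst)
        clock += burst
    return result

print(sjf_schedule([(0, 7), (1, 2), (2, 4)]))
# -> [(0.0, 7.0), (7.0, 9.0), (9.0, 13.0)]
```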
