Abstract

Delivering cloud-like computing facilities at the network edge provides ultra-low-latency access to computing services, yielding highly responsive service to application requests. Fog computing has emerged as a paradigm that adds layers of computing nodes between the edge and the cloud, also known as micro data centers, cloudlets, or fog nodes. Based on this premise, this article proposes a component-based service scheduler for a cloud-fog computing infrastructure comprising several layers of fog nodes between the edge and the cloud. The proposed scheduler aims to satisfy the applications' latency requirements by deciding which service components should be moved upwards in the fog-cloud hierarchy to alleviate computing workloads at the network edge. A communication-aware resource allocation policy is introduced to enforce prioritized resource access among applications. We evaluate the proposal using the well-known iFogSim simulator. Results suggest that the proposed component-based scheduling algorithm can reduce average delays for application services with stricter latency requirements while also reducing total network usage when applications exchange data between components. On average, our policy reduced the overload impact on network usage by approximately 11 percent compared to the best allocation policy in the literature while maintaining acceptable delays for latency-sensitive applications.
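The placement decision described in the abstract can be pictured with a short, hedged sketch. The Java snippet below only illustrates an Edgewards-style, communication-aware placement step: ServiceComponent, FogNode, the MIPS capacity check, and the latency-budget ordering are all assumed placeholder abstractions, not the authors' implementation and not iFogSim code.

```java
import java.util.Comparator;
import java.util.List;

/**
 * Illustrative sketch only: an Edgewards-style, communication-aware placement step.
 * ServiceComponent and FogNode are hypothetical placeholder types, not iFogSim classes.
 */
class ComponentSchedulerSketch {

    /**
     * Place each component on the lowest node of the edge-to-cloud path that can host it,
     * serving components with stricter latency budgets first.
     */
    static void place(List<ServiceComponent> components, List<FogNode> pathEdgeToCloud) {
        // Communication-aware prioritization: tighter latency budgets are scheduled first.
        components.sort(Comparator.comparingDouble(ServiceComponent::latencyBudgetMs));

        for (ServiceComponent c : components) {
            for (FogNode node : pathEdgeToCloud) {       // edge -> cloudlet layers -> cloud
                if (node.freeMips() >= c.requiredMips()) {
                    node.host(c);                        // enough capacity at this level
                    break;
                }
                // Otherwise the component is pushed one level upwards in the hierarchy.
            }
        }
    }

    /** Placeholder abstractions used only for this sketch. */
    interface ServiceComponent { double latencyBudgetMs(); double requiredMips(); }
    interface FogNode { double freeMips(); void host(ServiceComponent c); }
}
```

The design choice mirrored here matches the abstract: components of latency-sensitive applications compete for edge resources first, and whatever cannot be accommodated spills upwards towards the cloud.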

Highlights

  • Cloud computing is a consolidated paradigm that has leveraged the Internet of Things (IoT) industry

  • 1) We propose a communication-aware scheduling policy for a fog-cloud computing system, namely Communication Based & Edgewards (CB-E); 2) We consider a fog-cloud hierarchy topology in our experiments comprising three levels: two layers of cloudlets and a cloud layer (sketched below, after the highlights); 3) We evaluate our proposed policy using two common types of component-based IoT services: a latency-sensitive online game and a delay-tolerant video surveillance network; 4) We evaluate and demonstrate the effectiveness of our proposed policy using the well-established fog computing simulator iFogSim [13]

  • We discuss a comprehensive set of experiments to validate our proposed communication-aware scheduling policy for a fog-cloud computing environment: Communication Based & Edgewards (CB-E)
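To make the hierarchy mentioned in the highlights more concrete, the following sketch builds a three-level fog-cloud tree (one cloud above two cloudlet layers) as a plain Java object structure. All names, node counts per layer, and MIPS capacities are illustrative assumptions, not the paper's actual iFogSim topology or parameters.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Illustrative sketch only: a three-level fog-cloud hierarchy (cloud above two cloudlet layers).
 * Node counts, names, and MIPS values are assumptions, not the paper's iFogSim configuration.
 */
class HierarchySketch {

    record Node(String name, double mips, Node parent) { }

    public static void main(String[] args) {
        Node cloud = new Node("cloud", 44_800, null);              // top of the hierarchy
        List<Node> upperCloudlets = new ArrayList<>();
        List<Node> lowerCloudlets = new ArrayList<>();

        for (int i = 0; i < 2; i++) {                              // upper cloudlet layer
            Node upper = new Node("cloudlet-upper-" + i, 5_600, cloud);
            upperCloudlets.add(upper);
            for (int j = 0; j < 2; j++) {                          // lower cloudlet layer, closest to the edge
                lowerCloudlets.add(new Node("cloudlet-lower-" + i + "-" + j, 2_800, upper));
            }
        }

        System.out.println("1 cloud, " + upperCloudlets.size() + " upper cloudlets, "
                + lowerCloudlets.size() + " lower cloudlets");
    }
}
```

A tree of this shape is what an Edgewards-style policy such as CB-E would traverse when deciding how far up a component must be pushed from the edge.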



Introduction

Cloud computing is a consolidated paradigm that has leveraged the Internet of Things (IoT) industry. Applications that usually require substantial computing capability have been successfully executed "in the cloud" through computation offloading [1]. This practice offloads (hence the name) most of the application's workload to a remote data center facility for intensive data processing. Doing so significantly extends the computing capabilities of IoT devices facing high computation demands [2]. However, as cloud data centers are ordinarily located far from the devices at the edge of the network, a degradation in the application's quality of service is likely to occur, putting the user's quality of experience at critical risk [5].
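A back-of-the-envelope estimate helps illustrate the trade-off described in this paragraph: offloading pays off when the remote speed-up outweighs the network cost, and the round-trip time to a distant cloud is exactly what erodes that benefit. The method names and every parameter value below (MIPS, RTT, payload size, bandwidth) are illustrative assumptions, not measurements from the paper.

```java
/**
 * Illustrative sketch only: a rough offload-or-not estimate.
 * All parameter values are assumptions, not measurements from the paper.
 */
class OffloadDecisionSketch {

    /** Estimated completion time (ms) if the task runs on the IoT device itself. */
    static double localTimeMs(double taskMi, double deviceMips) {
        return taskMi / deviceMips * 1_000;
    }

    /** Estimated completion time (ms) if offloaded: network round trip + upload + remote execution. */
    static double offloadTimeMs(double taskMi, double cloudMips, double rttMs,
                                double payloadKb, double bandwidthKbps) {
        return rttMs + (payloadKb / bandwidthKbps) * 1_000 + taskMi / cloudMips * 1_000;
    }

    public static void main(String[] args) {
        double taskMi = 4_000;                                           // task size in million instructions
        double local = localTimeMs(taskMi, 500);                         // constrained edge device
        double remote = offloadTimeMs(taskMi, 44_800, 100, 500, 10_000); // distant cloud data center
        System.out.printf("local: %.1f ms, offloaded: %.1f ms -> %s%n",
                local, remote, remote < local ? "offload" : "run locally");
    }
}
```

With these assumed numbers offloading wins comfortably (roughly 240 ms versus 8 s locally), but the round-trip term grows with the distance to the data center, which is precisely the quality-of-service degradation that motivates placing fog nodes closer to the edge.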

