Abstract

In this article, we work toward an answer to the question “is it worth processing a data stream on the device that collected it, or should we send it somewhere else?”. As is often the case in computer science, the response is “it depends”. To find out in which cases it is more profitable to stay on the device (which is part of the fog) or to go to a different one (for example, a device in the cloud), we propose two models that help the user evaluate the cost of performing a certain computation in the fog versus sending all the data to be handled by the cloud. In our generic mathematical model, the user can define a cost type (e.g., number of instructions, execution time, energy consumption) and plug in values to analyze test cases. As filters have a very important role in the future of the Internet of Things and can be implemented as lightweight programs capable of running on resource-constrained devices, this kind of procedure is the main focus of our study. Furthermore, our visual model guides the user in their decision by aiding the visualization of the proposed linear equations and their slopes, which allows them to determine whether fog or cloud computing is more profitable for their specific scenario. We validated our models by analyzing four benchmark instances (two applications using two different sets of parameters each) executed on five datasets, using execution time and energy consumption as the cost types for this investigation.
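The exact equations are given later in the article; as a rough illustration of the idea behind the visual model, each platform's cost can be sketched as a linear function of the input size, with the break-even point determined by the difference in slopes. The function names and numbers below are hypothetical, not taken from the paper:

```python
# Hedged sketch (assumed linear form, not the paper's exact equations):
# model each platform's cost as cost(n) = fixed + per_item * n, where n is
# the number of stream items and the cost unit is user-defined
# (e.g., seconds or joules), then find where the two lines cross.

def linear_cost(fixed, per_item, n):
    """Total cost of handling n stream items on one platform."""
    return fixed + per_item * n

def break_even(fog_fixed, fog_per_item, cloud_fixed, cloud_per_item):
    """Input size at which fog and cloud costs are equal, or None if the
    lines are parallel (one platform is always at least as cheap)."""
    if fog_per_item == cloud_per_item:
        return None
    return (cloud_fixed - fog_fixed) / (fog_per_item - cloud_per_item)

# Illustrative numbers only: offloading to the cloud pays a fixed
# transmission overhead, while the constrained fog device spends more
# per item; fog wins below the break-even size, cloud above it.
n_star = break_even(fog_fixed=0.0, fog_per_item=5.0,
                    cloud_fixed=100.0, cloud_per_item=3.0)
print(n_star)  # -> 50.0
```

The sign of the slope difference is what the visual model makes apparent: if the fog's per-item cost is lower, fog computing is profitable for all input sizes and no crossing exists.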

Highlights

  • Current prospects indicate that the number of devices connected to the Internet of Things (IoT) will reach the mark of 75 billion within the decade [1], and this huge increase in scale will certainly bring new challenges with it

  • This article presents a general mathematical model and a visual model created to assist the analysis of the fog-cloud computing cost trade-off

  • In a way, looking for the answer to this question is akin to a search for cases where fog computing is more profitable than cloud computing

Summary

Introduction

Current prospects indicate that the number of devices connected to the Internet of Things (IoT) will reach the mark of 75 billion within the decade [1], and this huge increase in scale will certainly bring new challenges with it. Enabling computation to be performed near the data source has many advantages, such as allowing us to address issues related to transmission latency and network congestion. It also creates a window for new possibilities: filtering and discarding unnecessary information, analyzing readings in search of outliers that can be reported, combining readings from different sensors, and actual real-time response to local queries, among other applications. We present an analysis of the proposed models using two different data filters as applications, with test cases for two distinct instances of each program being executed on five datasets (four containing real-world data and one with artificial data). To this end, we consider the metrics execution time and energy consumption.

Related Work
Computation Offloading Schemes
Our Approach
Modeling Platforms
General Equations
Estimating f
Analyzing Test Cases
Choosing an Approach to Estimate f
Deciding between Fog and Cloud Considering Execution Time
Deciding between Fog and Cloud Considering Energy Consumption
Simulating Other Scenarios
Findings
Conclusions
