Abstract

The continuous data-flow application in the IoT integrates the functions of fog, edge, and cloud computing; a typical example is the E-Health system. As in other IoT applications, optimizing the energy consumption of IoT devices in continuous data-flow applications is a challenging problem. Since anomalous nodes in the network increase energy consumption, continuous data flows should bypass these nodes as much as possible. Existing research on continuous data-flow performance usually optimizes at the level of system architecture design and deployment. In this paper, a mathematical programming method is proposed for the first time to optimize the runtime performance of continuous data-flow applications. A lightweight anomaly detection method is proposed to evaluate the reliability of nodes, and the node reliability is then fed into the optimization algorithm to estimate task latency. The latency-aware energy consumption optimization for continuous data flow is modeled as a mixed-integer nonlinear programming problem, and a block coordinate descent-based max-flow algorithm is proposed to solve it. Numerical simulations based on real-life datasets show that the proposed strategy outperforms the benchmark strategy.
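The abstract's solution method alternates over blocks of decision variables. As an illustration only (the objective and update rules below are invented for this sketch, not the paper's MINLP model), block coordinate descent fixes all blocks but one and minimizes over that block in closed form, cycling until the iterates stop changing:

```python
def bcd_minimize(iters=100):
    """Toy block coordinate descent on E(x, y) = (x-2)^2 + (y+1)^2 + 0.5*x*y.
    Each step fixes one variable and solves the other exactly from the
    first-order optimality condition of that block."""
    x, y = 0.0, 0.0
    for _ in range(iters):
        x = 2.0 - 0.25 * y    # solves dE/dx = 2(x - 2) + 0.5*y = 0
        y = -1.0 - 0.25 * x   # solves dE/dy = 2(y + 1) + 0.5*x = 0
    return x, y

x, y = bcd_minimize()
print(round(x, 4), round(y, 4))  # → 2.4 -1.6
```

The per-block updates are contractions here, so the iterates converge to the joint minimizer; in the paper's setting the blocks would instead be the integer routing variables and the continuous flow variables.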

Highlights

  • The promising big data applications based on IoT produce so much data [1,2] that it is impractical to transfer all of it to the data center for real-time processing

  • These applications have domain-specific tasks that are offloaded to Fog nodes or Multi-access Edge Computing (MEC) servers to execute all kinds of complicated computation, such as intelligent video acceleration and augmented reality (AR)

  • According to the study by Pereira et al. [13], the E-Health Monitoring (EHM) ecosystem can be divided into the Gateway (GW), Network Service Capability Layer (NSCL), and Data Processor (DP)

Summary

Introduction

The promising big data applications based on IoT produce so much data [1,2] that it is impractical to transfer all of it to the data center for real-time processing. To achieve latency awareness for the CDF problem, we put forward a lightweight anomaly detection strategy. This strategy uses only the cumulative historical latency data of fog/MEC nodes to discover anomalous nodes. We then build a system model composed of four levels of entities in IFEC computing. This model is used to formulate an optimization problem that minimizes energy consumption subject to latency constraints, in the presence of anomalous fog or MEC nodes in the system.
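The outline below lists an F-Test step for anomaly discovery. A minimal sketch of that idea, assuming the test compares the variance of a node's recent latency samples against its historical baseline (the function name, window sizes, and significance level are illustrative, not from the paper):

```python
import numpy as np
from scipy import stats

def is_anomalous(baseline_latencies, recent_latencies, alpha=0.05):
    """Flag a fog/MEC node whose recent latency variance differs
    significantly from its historical baseline (two-sided F-test)."""
    baseline = np.asarray(baseline_latencies, dtype=float)
    recent = np.asarray(recent_latencies, dtype=float)
    f_stat = np.var(recent, ddof=1) / np.var(baseline, ddof=1)
    dfn, dfd = len(recent) - 1, len(baseline) - 1
    # Two-sided p-value for the variance-ratio test.
    p = 2 * min(stats.f.cdf(f_stat, dfn, dfd),
                stats.f.sf(f_stat, dfn, dfd))
    return bool(p < alpha)

rng = np.random.default_rng(0)
stable = rng.normal(10.0, 1.0, 200)    # steady node: ~N(10 ms, 1 ms)
degraded = rng.normal(10.0, 4.0, 50)   # node with inflated latency jitter
print(is_anomalous(stable, degraded))  # → True
```

A node flagged this way would then receive a lower confidence score, steering the flow-routing optimization away from it.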

Motivation Scenario
System Model
Anomalous Nodes Discovery and Confidence Evaluation
F-Test
Put It All Together
Latency in IoT End Level
Latency in Fog and MEC Level
Problem Formulation
Block Coordinate Descent Based Multi-Flow Algorithm
Best-Effort Algorithm
Anomaly Detection Based Latency Awareness
Task Arrival Rate Analysis
Verification of the Proposed Algorithm
Background
Conclusions