Abstract

Internet-of-Things (IoT) environments involve hard real-time tasks that must execute within fixed deadlines. Because IoT devices host a myriad of sensors, each task comprises multiple interdependent subtasks. Cloud and fog computing platforms can help IoT sensor nodes (SNs) accommodate such complex operations with minimal delay. To further reduce operational latency, we break each high-level task down into smaller subtasks and form a directed acyclic task graph (DATG). Initially, an SN offloads its task to a nearby fog node (FN) chosen greedily; the greedy formulation selects the FN in linear time and avoids combinatorial optimization at the SN, saving both time and energy. Because IoT environments are highly dynamic, adaptive solutions are needed. At the chosen FN, taking into account the dependencies in the DATG, the corresponding deadlines, and the varying conditions of the other FNs, we propose an $\epsilon$-greedy nonstationary multiarmed bandit-based scheme (D2CIT) for online task allocation among the FNs. Through online learning, D2CIT lets the FN autonomously select a set of FNs to distribute the subtasks among and execute them in parallel with minimum latency, energy, and resource usage. Simulation results show that D2CIT reduces latency by 17% compared with traditional fog computing schemes and, compared with existing online learning-based task offloading solutions in fog environments, achieves a 59% speedup due to the induced parallelism.
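As a rough illustration of the learning component described above, the following is a minimal sketch of an $\epsilon$-greedy bandit with a constant step size, which weights recent rewards more heavily and is a standard way to track nonstationary arm payoffs (here, changing FN conditions). All names, parameters, and the reward model are assumptions for illustration, not the paper's D2CIT specification.

```python
import random

class EpsilonGreedyNonstationaryBandit:
    """Illustrative epsilon-greedy bandit for picking among fog nodes.

    A constant step size (exponential recency-weighted average) lets the
    reward estimates track drifting FN conditions. This is a hypothetical
    sketch, not the D2CIT algorithm from the paper.
    """

    def __init__(self, n_arms, epsilon=0.1, step_size=0.2, seed=None):
        self.epsilon = epsilon            # exploration probability
        self.step_size = step_size        # constant alpha -> recency weighting
        self.estimates = [0.0] * n_arms   # per-FN reward estimates
        self.rng = random.Random(seed)

    def select_arm(self):
        # Explore a random FN with probability epsilon; otherwise exploit
        # the FN with the highest current reward estimate.
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(len(self.estimates))
        return max(range(len(self.estimates)), key=self.estimates.__getitem__)

    def update(self, arm, reward):
        # Exponential recency-weighted update: recent observations dominate,
        # so the estimate adapts when an FN's conditions change.
        self.estimates[arm] += self.step_size * (reward - self.estimates[arm])
```

In a task-allocation loop, the reward would typically be derived from observed latency or energy of a subtask executed on the selected FN, and one such bandit could be maintained per subtask class.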
