Abstract

With the emergence of delay-sensitive task completion, computational offloading becomes increasingly desirable because end-users are limited in their ability to run computation-intensive applications. Fog computing enables computational offloading for end-users toward delay-sensitive task provisioning. In this paper, we study computational offloading for multiple tasks with various delay requirements, initiated one task at a time on the end-user side. In our scenario, the end-user offloads the task data to its primary fog node. However, because fog nodes have limited computing resources compared to the remote cloud server, it is challenging to process the task data entirely at the primary fog node within the delay deadline imposed by the applications initiated by the end-users. The primary fog node is therefore responsible for deciding how much of the task data to offload to a secondary fog node and/or the remote cloud. Moreover, computational resource allocation, in terms of CPU cycles to process each bit of task data at a fog node, and transmission resource allocation between a fog node and the remote cloud are also important factors. We formulate this problem as a Quadratically Constrained Quadratic Program (QCQP) and provide a solution. Our extensive simulation results demonstrate the effectiveness of the proposed offloading scheme under different delay deadlines and traffic intensity levels.
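The core trade-off in the abstract, splitting task data between local fog processing and offloading under a delay deadline, can be illustrated with a toy two-node model. This is a minimal sketch, not the paper's QCQP formulation: the single-split closed form, parameter names, and numeric values below are all illustrative assumptions.

```python
def optimal_split(D, c, f_fog, f_cloud, R):
    """Choose the fraction x of a D-bit task processed at the primary
    fog node so that completion time max(local delay, remote delay)
    is minimised. Toy model (not the paper's QCQP):
      D       task size in bits
      c       CPU cycles needed per bit
      f_fog   fog-node CPU speed, cycles/s
      f_cloud cloud CPU speed, cycles/s
      R       fog-to-cloud transmission rate, bits/s
    """
    t_local_per_bit = c / f_fog                 # seconds per bit at the fog node
    t_remote_per_bit = 1.0 / R + c / f_cloud    # transmit + compute per bit remotely
    # The max of two linear functions of x is minimised where they intersect:
    # x*D*t_local_per_bit == (1-x)*D*t_remote_per_bit
    x = t_remote_per_bit / (t_local_per_bit + t_remote_per_bit)
    delay = x * D * t_local_per_bit             # equals the remote-side delay here
    return x, delay
```

A deadline check then reduces to comparing the returned delay against the task's deadline; in the paper's multi-task, multi-fog-node setting these splits are coupled through shared CPU and transmission resources, which is what motivates the QCQP formulation.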

Highlights

  • With the emergence of ultra-reliable and low-latency communications [1]–[4], latency- and reliability-aware mission-critical applications are growing rapidly

  • Simulation results: the performance of the proposed solution for task offloading in multi-task, delay-sensitive fog networks is evaluated with Monte Carlo simulations

  • We further compare the performance of the proposed scheme with a baseline approach, called fixed resource allocation, in which the transmission resources are distributed over all the fog nodes and each fog node allocates an equal amount of CPU resources to each task


Summary

Introduction

With the emergence of ultra-reliable and low-latency communications (uRLLC) [1]–[4], latency- and reliability-aware mission-critical applications are growing rapidly. At the same time, the end-user's computational resources limit the user's experience (e.g., latency and reliability) for computation-intensive applications. Although cloud computing has already proven its significance for processing computation-intensive tasks, the physical distance between the end-user and the remote cloud data center, together with the burden on the fronthaul link, is a major barrier for low-latency-aware applications.

A. MOTIVATION

For computation-intensive task processing in a fog computing scenario, the end-user offloads the data either partially or entirely to the nearby fog computing node(s). It would be an ideal solution if a single fog computing node (hereinafter referred to as a fog node) were able to compute and process the task data and deliver the results for the tasks

