Abstract

Fog computing has emerged as a paradigm that enables resource-constrained Internet of Things (IoT) devices to support time-sensitive and computationally intensive applications. Offloading transfers resource-intensive tasks from resource-limited end devices to a resource-rich fog or cloud layer to reduce end-to-end latency and improve system performance. However, this benefit is difficult to realize under high request rates, because heavy loads build long task queues at fog nodes and introduce substantial delays. Reinforcement learning (RL) is a well-established method for such sequential decision-making problems, but in large-scale wireless networks both the action and state spaces are extremely large, so RL techniques may fail to identify an efficient strategy within an acceptable time frame. Deep reinforcement learning (DRL), which integrates RL with deep learning (DL), was developed to address this limitation. This paper presents a systematic analysis of RL and DRL algorithms for addressing offloading-related issues in fog computing. First, a taxonomy of RL- and DRL-based fog offloading mechanisms is presented, divided into three major categories: value-based, policy-based, and hybrid algorithms. These categories are then compared on important features, including offloading problem formulation, utilized techniques, performance metrics, evaluation tools, case studies, strengths and drawbacks, offloading directions, offloading mode, SDN-based architecture, and offloading decisions. Finally, future research directions and open issues are discussed thoroughly.
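
To make the value-based category concrete, the following is a minimal sketch (not drawn from any of the surveyed works) of tabular Q-learning applied to a binary offloading decision. The environment model, latency constants, and state discretization are illustrative assumptions chosen only to show the shape of the technique.

```python
# Minimal sketch of a value-based (tabular Q-learning) offloading policy.
# Assumptions (not from the paper): the state is a discretized fog-queue
# length, the actions are {0: execute locally, 1: offload to fog}, and the
# reward is the negative end-to-end latency. All constants are illustrative.
import random

N_QUEUE_LEVELS = 10          # discretized fog-node queue length (state space)
ACTIONS = (0, 1)             # 0 = local execution, 1 = offload to fog
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

LOCAL_LATENCY = 8.0          # assumed fixed latency of local execution
TX_LATENCY = 1.0             # assumed transmission delay when offloading
FOG_SERVICE = 0.5            # assumed fog per-task service time

Q = [[0.0, 0.0] for _ in range(N_QUEUE_LEVELS)]

def step(queue, action):
    """Toy environment: return (reward, next_queue) for one offloading decision."""
    if action == 1:
        # Offloading latency grows with the current fog queue length.
        latency = TX_LATENCY + FOG_SERVICE * (queue + 1)
        queue = min(queue + 1, N_QUEUE_LEVELS - 1)
    else:
        latency = LOCAL_LATENCY
    queue = max(queue - random.randint(0, 2), 0)  # fog node serves some tasks
    return -latency, queue                        # reward = negative latency

def choose(queue):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[queue][a])

queue = 0
for _ in range(20_000):  # online learning over simulated task arrivals
    a = choose(queue)
    r, nxt = step(queue, a)
    # Standard Q-learning update: Q(s,a) += alpha*(r + gamma*max_a' Q(s',a') - Q(s,a))
    Q[queue][a] += ALPHA * (r + GAMMA * max(Q[nxt]) - Q[queue][a])
    queue = nxt

for q, (local, offload) in enumerate(Q):
    print(f"queue={q}: best action = {'offload' if offload > local else 'local'}")
```

Under these assumptions the learned policy offloads while the fog queue is short and falls back to local execution as the queue grows, which is the qualitative behavior the queueing-delay discussion above describes. DRL methods replace the Q-table with a neural network when the state space is too large to enumerate.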
