Abstract

In IoT-based systems, fog computing allows fog nodes to offload and process tasks requested by IoT-enabled devices in a distributed manner, instead of relying on centralized cloud servers, thereby reducing response delay. However, achieving this benefit remains challenging in systems with high request rates, which imply long task queues at the fog nodes and can therefore make task offloading inefficient in terms of latency. In addition, the highly heterogeneous fog environment introduces a further issue: many individual fog nodes cannot process heavy tasks due to a lack of available resources or limited computing capability. Reinforcement learning (RL) is a rising branch of machine learning that provides intelligent decision making, enabling agents to respond effectively to the dynamics of their environment. This suggests great potential for applying RL to resource allocation for task offloading and execution in fog computing in order to improve performance. This work presents an overview of RL applications for solving resource-allocation problems in the fog computing environment. Open issues and challenges are explored and discussed for further study.
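To make the idea concrete, the sketch below shows a minimal tabular RL agent choosing where to offload a task based on the fog node's queue length. This is a toy illustration, not a method from the surveyed literature: the state space (queue length 0-4), the two actions (fog vs. cloud), the latency-based reward, and the one-step update with zero discount (a contextual-bandit simplification of Q-learning) are all assumptions made for this example.

```python
import random

# Toy model (illustrative assumptions, not from the paper):
#   state  = current queue length at the fog node (0..4)
#   action = 0: offload to the fog node, 1: offload to the cloud
#   reward = negative latency: fog latency grows with queue length,
#            cloud latency is a fixed round-trip cost.
N_STATES, N_ACTIONS = 5, 2
CLOUD_COST = 2.0

def reward(state, action):
    return -float(state) if action == 0 else -CLOUD_COST

def train(episodes=5000, alpha=0.5, epsilon=0.2, seed=42):
    rng = random.Random(seed)
    Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
    for _ in range(episodes):
        s = rng.randrange(N_STATES)        # requests arrive at random queue states
        if rng.random() < epsilon:         # epsilon-greedy exploration
            a = rng.randrange(N_ACTIONS)
        else:                              # otherwise act greedily on current estimates
            a = max(range(N_ACTIONS), key=lambda x: Q[s][x])
        r = reward(s, a)
        Q[s][a] += alpha * (r - Q[s][a])   # one-step value update (discount = 0)
    return Q

Q = train()
policy = [max(range(N_ACTIONS), key=lambda a: Q[s][a]) for s in range(N_STATES)]
print(policy)  # short queues favor the fog node (0), long queues the cloud (1)
```

Under these assumptions the learned policy offloads to the fog node while its queue is short and falls back to the cloud once the local queue makes fog latency exceed the cloud round trip, which is the kind of latency-aware decision making the surveyed RL approaches aim to automate.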
