Abstract

With the rapid growth of Internet of Things (IoT) devices and their applications, the demand for executing complex, deadline-aware tasks is increasing. Fog-enabled IoT architectures have emerged to execute such tasks at the fog layer. However, fog devices have limited power and computational resources compared with cloud servers. In delay-sensitive fog-enabled IoT applications, executing tasks with stringent deadlines while reducing service latency and the energy usage of fog resources is a challenging problem. This paper presents an effective task scheduling strategy that allocates fog computing resources to IoT requests while satisfying task deadlines and resource-availability constraints. The scheduling problem is first formulated as a mixed-integer nonlinear program (MINLP) that minimizes the energy consumption of fog resources and the service time of tasks, subject to deadline and resource-availability constraints. To cope with the high dimensionality of tasks in a dynamic environment, a fuzzy-based reinforcement learning (FRL) mechanism is employed to reduce task service delay and the energy usage of fog nodes. Tasks are first prioritized using fuzzy logic; the prioritized tasks are then scheduled with an on-policy reinforcement learning technique, which improves the long-term reward compared with the Q-learning approach. Evaluation results show that the proposed task scheduling technique outperforms existing algorithms, with improvements of up to 23% in service latency and 18% in energy consumption.
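The abstract outlines a two-stage scheme (fuzzy prioritization followed by on-policy scheduling) without giving its exact formulation. The sketch below is therefore only an illustrative reconstruction: the triangular membership functions, the discretized state, the reward weights, and the toy fog-node speed/power values are all assumptions rather than the paper's design, and a SARSA update stands in for the unnamed on-policy method.

```python
# Illustrative sketch only: the paper's exact membership functions, state/action
# encoding, and reward weights are not given in the abstract, so everything
# below is a hypothetical stand-in for a fuzzy-prioritized, on-policy scheduler.
import random
from collections import defaultdict

def fuzzy_priority(deadline_slack, task_size):
    """Map normalized slack (0..1, small = urgent) and size (0..1) to a priority score.
    Triangular memberships and a fixed rule weighting are assumed here."""
    urgent = max(0.0, 1.0 - deadline_slack / 0.5)     # high when slack is small
    relaxed = max(0.0, (deadline_slack - 0.5) / 0.5)  # high when slack is large
    heavy = task_size                                 # linear membership for size
    # Weighted-average defuzzification: urgent, heavy tasks get higher priority.
    return 0.6 * urgent + 0.3 * heavy + 0.1 * (1.0 - relaxed)

def reward(service_time, energy, w_time=0.5, w_energy=0.5):
    """Negative weighted cost, mirroring the bi-objective (latency + energy) goal."""
    return -(w_time * service_time + w_energy * energy)

def epsilon_greedy(Q, state, actions, eps=0.1):
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def sarsa_schedule(episodes=2000, n_nodes=3, alpha=0.1, gamma=0.9):
    """On-policy (SARSA) learning of a node-selection policy for prioritized tasks."""
    Q = defaultdict(float)
    actions = list(range(n_nodes))
    node_speed = [1.0, 1.5, 2.0]   # toy heterogeneous fog nodes (assumed values)
    node_power = [1.0, 1.4, 2.2]
    for _ in range(episodes):
        # Draw a toy task and discretize its fuzzy priority into a state.
        slack, size = random.random(), random.random()
        state = "hi" if fuzzy_priority(slack, size) > 0.5 else "lo"
        action = epsilon_greedy(Q, state, actions)
        for _ in range(5):         # short episode of consecutive tasks
            t = size / node_speed[action]
            e = t * node_power[action]
            r = reward(t, e)
            slack, size = random.random(), random.random()
            next_state = "hi" if fuzzy_priority(slack, size) > 0.5 else "lo"
            next_action = epsilon_greedy(Q, next_state, actions)
            # SARSA update uses the action actually taken next (on-policy),
            # unlike Q-learning's max over all actions.
            Q[(state, action)] += alpha * (
                r + gamma * Q[(next_state, next_action)] - Q[(state, action)]
            )
            state, action = next_state, next_action
    return Q

if __name__ == "__main__":
    Q = sarsa_schedule()
    print({k: round(v, 2) for k, v in sorted(Q.items())})
```

Under these assumptions, the learned table would be queried at dispatch time: an incoming task's fuzzy priority selects the state, and the greedy action over Q selects the fog node to execute it.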
