Efficient task scheduling in cloud and fog computing environments remains a significant challenge due to the diverse nature and critical processing requirements of tasks originating from heterogeneous devices. Traditional scheduling methods often suffer from high latency and prolonged processing times, especially in applications that demand strict computational efficiency. To address these challenges, this paper proposes an advanced fog-cloud integration approach built around a deep reinforcement learning-based task scheduler, DRLMOTS (Deep Reinforcement Learning based Multi Objective Task Scheduler in Cloud Fog Environment). This scheduler intelligently evaluates task characteristics, such as task length and available processing capacity, to dynamically allocate computation to either fog nodes or cloud resources. The methodology leverages a Deep Q-Learning Network (DQN) model and is evaluated through extensive simulations using both randomized workloads and real-world Google Jobs Workloads. Comparative analysis demonstrates that DRLMOTS significantly outperforms existing baseline algorithms such as CNN, LSTM, and GGCN, reducing makespan by up to 26.80%, 18.84%, and 13.83%, respectively, and decreasing energy consumption by up to 39.60%, 30.29%, and 27.11%. Additionally, the proposed scheduler enhances fault tolerance, showcasing improvements of up to 221.89%, 17.05%, and 11.05% over these methods. These results validate the efficiency and robustness of DRLMOTS in optimizing task scheduling in fog-cloud environments.
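The abstract does not include implementation details, so the following minimal Python sketch only illustrates the general idea of a DQN-style placement decision between a fog node and the cloud. The state layout, network size, and the `schedule` helper are illustrative assumptions, not the authors' code; in DRLMOTS the reward signal would presumably combine the stated objectives (makespan, energy consumption, fault tolerance).

```python
# Illustrative sketch only: a DQN-style fog/cloud placement decision.
# State layout and network architecture are assumptions, not DRLMOTS itself.
import random
import torch
import torch.nn as nn

# Hypothetical state: [task_length, fog_capacity, cloud_capacity,
#                      fog_queue_load, cloud_queue_load]
STATE_DIM, N_ACTIONS = 5, 2  # action 0 -> fog node, action 1 -> cloud

q_net = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, N_ACTIONS),  # one Q-value per placement choice
)

def schedule(state, epsilon=0.1):
    """Epsilon-greedy placement: explore randomly with prob. epsilon,
    otherwise pick the action with the highest predicted Q-value."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        q_values = q_net(torch.as_tensor(state, dtype=torch.float32))
    return int(q_values.argmax())

# Example: a long task arriving while the fog tier is heavily loaded.
action = schedule([8e4, 1.5e3, 1e4, 0.9, 0.2])
print("assign to", "fog" if action == 0 else "cloud")
```

During training, the network would be updated from experience tuples (state, action, reward, next state) with the usual Q-learning target, so that placements yielding shorter makespan and lower energy use are reinforced.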