Effective task scheduling can significantly impact performance, productivity, and profitability in many real-world settings, such as production lines, logistics, and transportation systems. Traditional approaches to task scheduling rely on heuristics or simple rule-based methods. However, with the emergence of machine learning and artificial intelligence, there is growing interest in using these methods to optimize task scheduling. In particular, reinforcement learning is a promising approach to task scheduling because it can learn from experience and adapt to changing conditions. One step that is often missed or neglected is the choice of optimal algorithm parameters and of the way the environment is modeled. This study analyzes the achievable performance of task scheduling based on reinforcement learning. An in-depth analysis makes it possible to select highly efficient environment models and Q-learning parameters. Moreover, automatic parameter selection based on optimization algorithms is proposed. Even with optimal parameters, however, resilience to environmental changes remains poor. This finding motivated the authors to develop a novel Hybrid Q-learning approach, which delivers superior efficiency regardless of the environmental parameters.
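To illustrate what "Q-learning parameters" refers to in this context, the sketch below applies tabular Q-learning to a toy task-scheduling problem. The environment (three tasks with processing times and due dates), the negative-tardiness reward, and the parameter values (ALPHA, GAMMA, EPSILON) are illustrative assumptions for exposition only, not the setup or the Hybrid Q-learning method described in the paper.

```python
# Minimal tabular Q-learning sketch for a toy task-scheduling problem.
# All names, rewards, and parameter values here are assumptions.
import random

# Toy instance: three tasks given as (processing_time, due_date).
TASKS = [(4, 5), (2, 6), (3, 9)]

ALPHA = 0.1      # learning rate     -- one of the tunable Q-learning parameters
GAMMA = 0.95     # discount factor   -- likewise tunable
EPSILON = 0.2    # exploration rate  -- likewise tunable
EPISODES = 2000

Q = {}  # Q-table keyed by (state, action); state = frozenset of scheduled task indices


def q_value(state, action):
    return Q.get((state, action), 0.0)


def choose_action(state, remaining):
    """Epsilon-greedy choice among tasks not yet scheduled."""
    if random.random() < EPSILON:
        return random.choice(list(remaining))
    return max(remaining, key=lambda a: q_value(state, a))


def run_episode():
    state = frozenset()   # no tasks scheduled yet
    elapsed = 0           # makespan so far
    while len(state) < len(TASKS):
        remaining = set(range(len(TASKS))) - state
        action = choose_action(state, remaining)
        proc, due = TASKS[action]
        elapsed += proc
        # Negative tardiness as the immediate reward (an illustrative choice).
        reward = -max(0, elapsed - due)
        next_state = state | {action}
        next_remaining = set(range(len(TASKS))) - next_state
        best_next = max((q_value(next_state, a) for a in next_remaining), default=0.0)
        # Standard Q-learning update rule.
        Q[(state, action)] = q_value(state, action) + ALPHA * (
            reward + GAMMA * best_next - q_value(state, action)
        )
        state = next_state


if __name__ == "__main__":
    random.seed(0)
    for _ in range(EPISODES):
        run_episode()
    # Greedy rollout of the learned policy.
    state, order = frozenset(), []
    while len(state) < len(TASKS):
        remaining = set(range(len(TASKS))) - state
        action = max(remaining, key=lambda a: q_value(state, a))
        order.append(action)
        state = state | {action}
    print("Learned task order:", order)
```

In a study such as this one, values like ALPHA, GAMMA, and EPSILON, together with the state and reward definitions of the environment model, are exactly the choices whose optimization (manual or automatic) the abstract refers to.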