Abstract

The stochastic and randomly changing nature of production environments poses a significant challenge for developing real-time responsive scheduling solutions. Many previous scheduling approaches assumed static environments or relied on user-anticipated, hand-crafted dynamic scenarios; in real-world production, however, events are random and unpredictable. This study formulates the Job Shop Scheduling Problem (JSSP) as an iterative decision-making problem and designs a Deep Reinforcement Learning (DRL)-based solution to address these challenges. A deep neural network is used for function approximation, and input feature vectors are extracted iteratively for use in the sequential decision-making process. Production states are expressed as randomly changing feature vectors of each job's operations and the corresponding machines. The model is trained with the Double Deep Q-Network (DDQN) method. Results are evaluated on the well-known OR-Library benchmark problems. The evaluation indicates that the proposed approach is competitive on the benchmarks, and the scheduling agent achieves an average scheduling score of 94.86% on unseen instances.

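The training method named in the abstract centers on the Double DQN update, in which an online network selects the greedy next action while a periodically synchronized target network evaluates it. The following is a minimal sketch of that target computation in PyTorch; the network architecture, feature dimension, and function names are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Hypothetical MLP mapping a production-state feature vector to
    Q-values, one per candidate dispatching action (sizes assumed)."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, x):
        return self.net(x)

def ddqn_targets(online, target, rewards, next_states, dones, gamma=0.99):
    """Double DQN target: the online network chooses the next action,
    the target network evaluates it, reducing overestimation bias."""
    with torch.no_grad():
        next_actions = online(next_states).argmax(dim=1, keepdim=True)
        next_q = target(next_states).gather(1, next_actions).squeeze(1)
        return rewards + gamma * (1.0 - dones) * next_q
```

Decoupling action selection from action evaluation is what distinguishes DDQN from vanilla DQN: it mitigates the Q-value overestimation that can destabilize training in stochastic environments such as the scheduling setting described here.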