Abstract

In this work, we present a method that applies Deep Reinforcement Learning, an approximate dynamic programming procedure using deep neural networks, to the job shop scheduling problem (JSSP). The aim is to show that a greedy-like heuristic trained on a subset of problems can generalize, to some extent, to unseen instances and remain competitive with other methods. We model the JSSP as a Markov Decision Process and exploit reinforcement learning to solve the problem. We adopt an actor-critic scheme based on policy gradients, specifically the Proximal Policy Optimization (PPO) method, in which the actor selects actions according to the policy while the critic's estimate of the state-value function guides the policy updates. The procedure accounts for the challenging nature of the JSSP, where the state and action spaces change with every instance and after each decision. To handle this variability, we introduce a novel model based on two incident Long Short-Term Memory networks followed by an encoding model, whose structure differs between the actor and the critic. Experiments show that the algorithm reaches good solutions in a short time, demonstrating that it is possible to derive new greedy heuristics purely from learning-based methodologies. We compare our algorithm against several established heuristics, an adaptive method, a commercial solver based on branch and cut, and another approach based on Deep Reinforcement Learning, confirming the validity of the proposed method in terms of both time and makespan. The model can generalize, to some extent, to larger problems drawn from a different distribution.
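To make the described architecture concrete, the sketch below is a hypothetical PyTorch rendering of an actor built from two LSTM encoders whose outputs feed a shared encoding head. All layer sizes, feature definitions, and the way the two recurrent streams are combined are illustrative assumptions, not details taken from the paper; the critic would follow the same recurrent structure but end in a single state-value output.

```python
import torch
import torch.nn as nn


class ActorNetwork(nn.Module):
    """Minimal sketch (assumptions only): two LSTMs, one over job-related
    features and one over machine-related features, merged and passed to an
    encoding head that scores each candidate dispatching decision."""

    def __init__(self, job_feat_dim: int, machine_feat_dim: int, hidden_dim: int = 64):
        super().__init__()
        # Two recurrent encoders, one per feature stream.
        self.job_lstm = nn.LSTM(job_feat_dim, hidden_dim, batch_first=True)
        self.machine_lstm = nn.LSTM(machine_feat_dim, hidden_dim, batch_first=True)
        # Encoding head: maps the concatenated recurrent summaries to one
        # score per candidate; a critic variant would output a state value.
        self.encoder = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, job_seq: torch.Tensor, machine_seq: torch.Tensor) -> torch.Tensor:
        # job_seq: (n_candidates, seq_len, job_feat_dim); machine_seq analogous.
        _, (job_h, _) = self.job_lstm(job_seq)
        _, (mach_h, _) = self.machine_lstm(machine_seq)
        state = torch.cat([job_h[-1], mach_h[-1]], dim=-1)  # (n_candidates, 2*hidden)
        return self.encoder(state)                          # (n_candidates, 1) logits


# Usage: score 5 candidate decisions, each described by a 4-step sequence of
# 8 job features and 6 machine features, then normalize into action probabilities.
actor = ActorNetwork(job_feat_dim=8, machine_feat_dim=6)
logits = actor(torch.randn(5, 4, 8), torch.randn(5, 4, 6))
probs = torch.softmax(logits.squeeze(-1), dim=0)
```

Scoring each currently schedulable operation separately is one simple way to cope with an action set that changes after every decision; the paper's actual combination of the two recurrent outputs may differ.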
