Abstract

The job shop scheduling problem (JSSP) is a well-known NP-hard combinatorial optimization problem (COP) that seeks an optimal sequential assignment of a finite set of machines to a set of jobs while satisfying the specified problem constraints. Conventional solution approaches, including heuristic dispatching rules and evolutionary algorithms, have long been used to solve JSSPs. Recently, reinforcement learning (RL) has gained popularity for delivering better solution quality on JSSPs. In this research, we propose an end-to-end deep reinforcement learning (DRL) based scheduling model for solving the standard JSSP. Our DRL model uses the attention-based encoder of the Transformer network to embed the JSSP environment, represented as a disjunctive graph. We introduce a gating mechanism that modulates the flow of learnt features, preventing noisy features from propagating across the network and thereby enriching the node representations of the disjunctive graph. In addition, we design a novel gate-based graph pooling mechanism that preferentially constructs the graph embedding. A simple multi-layer perceptron (MLP) based action-selection network sequentially generates optimal schedules. The model is trained with the proximal policy optimization (PPO) algorithm, which is built on the actor-critic (AC) framework. Experimental results show that our model outperforms existing heuristics and state-of-the-art DRL baselines on generated instances and well-known public test benchmarks. © 2023 Institute of Electrical Engineers of Japan. Published by Wiley Periodicals LLC.
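To make the two gating ideas in the abstract concrete, the sketch below shows one plausible PyTorch realization, not the authors' implementation: a gated node update that blends each node's previous embedding with its attention-updated embedding, and a gate-based readout that weights each node's contribution to the graph embedding instead of plain mean pooling. All class names, dimensions, and the exact gate formulations are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GatedUpdate(nn.Module):
    """Hypothetical gating sketch: a learned sigmoid gate blends the
    previous node embedding with the attention-updated one, damping
    noisy features before they propagate to the next layer."""
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, h_prev: torch.Tensor, h_new: torch.Tensor) -> torch.Tensor:
        # g in (0, 1): per-feature weight placed on the new representation
        g = torch.sigmoid(self.gate(torch.cat([h_prev, h_new], dim=-1)))
        return g * h_new + (1.0 - g) * h_prev

class GatedGraphPooling(nn.Module):
    """Hypothetical gate-based readout: node-wise gates preferentially
    weight nodes when constructing the graph embedding."""
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (num_nodes, dim) node embeddings of the disjunctive graph
        w = torch.sigmoid(self.score(h))           # (num_nodes, 1) gates
        return (w * h).sum(dim=0) / w.sum(dim=0)   # gated weighted average

# Usage: gate the node update, then pool to a single graph vector
h_prev, h_new = torch.randn(20, 64), torch.randn(20, 64)
h = GatedUpdate(64)(h_prev, h_new)
graph_emb = GatedGraphPooling(64)(h)               # shape: (64,)
```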
