Abstract

The job shop scheduling problem, which involves the routing and sequencing of jobs in a job shop, is a central topic in industrial engineering. Approaches based on Deep Reinforcement Learning (DRL) are promising for handling the variability of real working conditions caused by dynamic events such as the arrival of new jobs and machine failures. Discrete Event Simulation (DES) is essential for training and testing DRL approaches, which rely on the interaction between an intelligent agent and the production system. Nonetheless, numerous papers in the literature implement and evaluate DRL techniques for the Dynamic Flexible Job Shop Problem (DFJSP) without a simulation environment. In this paper, the limitations of these techniques are highlighted, and a numerical experiment demonstrating their ineffectiveness is presented. Furthermore, to provide the scientific community with a simulation tool designed to be used in conjunction with DRL techniques, an agent-based discrete event simulator is also presented.
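The agent–simulator interaction described above can be sketched as follows. This is a minimal, illustrative example (not the simulator presented in the paper): a single-machine discrete event simulation in which jobs arrive dynamically and a dispatching agent is queried whenever the machine becomes idle. The class and function names, and the shortest-processing-time rule used as a stand-in policy, are assumptions for illustration; in a DRL setting the `spt_agent` function would be replaced by a trained policy.

```python
import heapq
import random

class JobShopSim:
    """Illustrative agent-based discrete event simulator (hypothetical sketch):
    jobs arrive dynamically and a dispatching agent picks the next job
    from the queue whenever the single machine becomes idle."""

    def __init__(self, n_jobs=10, seed=0):
        rng = random.Random(seed)
        # (arrival_time, job_id, processing_time): dynamic job arrivals
        self.arrivals = sorted(
            (rng.uniform(0, 20), j, rng.uniform(1, 5)) for j in range(n_jobs)
        )

    def run(self, agent):
        # Event list ordered by time: arrivals and job completions
        events = [(t, "arrival", (j, p)) for t, j, p in self.arrivals]
        heapq.heapify(events)
        queue, busy_until, completed = [], 0.0, []
        while events:
            now, kind, data = heapq.heappop(events)
            if kind == "arrival":
                queue.append(data)          # job joins the waiting queue
            else:
                completed.append((now, data))  # job finished at time `now`
            # Machine idle and work waiting: ask the agent for a decision
            if now >= busy_until and queue:
                state = {"time": now, "queue": list(queue)}
                job_id, proc = queue.pop(agent(state))
                busy_until = now + proc
                heapq.heappush(events, (busy_until, "done", job_id))
        return completed

def spt_agent(state):
    """Shortest-processing-time dispatching rule as a stand-in policy;
    a trained DRL agent would replace this function."""
    times = [p for _, p in state["queue"]]
    return times.index(min(times))

finished = JobShopSim().run(spt_agent)
makespan = max(t for t, _ in finished)
```

The key design point the sketch illustrates is that decisions are taken only at decision epochs generated by the simulator (a machine becoming idle while jobs wait), which is exactly the agent–environment coupling that implementations lacking a simulation environment cannot reproduce.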
