Abstract

The dynamic permutation flow shop scheduling problem (PFSP) has received increasing attention in recent years. To provide intelligent scheduling for the dynamic PFSP, we solve the dynamic PFSP with new job arrivals using deep reinforcement learning (DRL). A mathematical model is established with the objective of minimizing the total tardiness cost of all jobs arriving at the system, and a double deep Q network (DDQN) is adapted to solve the studied problem. A large range of instances is generated to train the DDQN-based scheduling agent. The training curve shows that the agent learned to choose appropriate actions at rescheduling points during training. After training, the saved model is compared with several well-known dispatching rules on a set of test instances, and the results show that the trained scheduling agent performs significantly better than these dispatching rules. Our work provides intelligent scheduling decisions for a flow shop under a dynamic production environment.
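The abstract names DDQN as the learning method but does not spell out the update rule here. As a rough, non-authoritative sketch of the idea, the snippet below shows the standard double-DQN target an agent of this kind would be trained against; the network architecture, state dimension, and action set (e.g., choosing among dispatching rules at rescheduling points) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a double-DQN training step for a scheduling agent.
# Network sizes, state encoding, and reward definition are assumptions for illustration.
import torch
import torch.nn as nn

class QNet(nn.Module):
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, s):
        return self.net(s)

state_dim, n_actions = 6, 4            # e.g. shop-state features, candidate dispatching rules
online = QNet(state_dim, n_actions)
target = QNet(state_dim, n_actions)
target.load_state_dict(online.state_dict())

gamma = 0.99
# A dummy mini-batch of transitions (state, action, reward, next state, done flag).
s = torch.rand(32, state_dim)
a = torch.randint(0, n_actions, (32,))
r = torch.rand(32)                     # e.g. negative incremental tardiness cost
s_next = torch.rand(32, state_dim)
done = torch.zeros(32)

# Double DQN: the online net selects the next action, the target net evaluates it,
# which reduces the overestimation bias of vanilla DQN.
with torch.no_grad():
    next_a = online(s_next).argmax(dim=1)
    next_q = target(s_next).gather(1, next_a.unsqueeze(1)).squeeze(1)
    y = r + gamma * (1.0 - done) * next_q

q = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
loss = nn.functional.smooth_l1_loss(q, y)
loss.backward()
```

In this setting the reward would typically be tied to the tardiness objective (e.g., the negative increase in total tardiness cost between rescheduling points), so that maximizing return corresponds to minimizing total tardiness.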
