Abstract

The distributed blocking flowshop scheduling problem (DBFSP) with new job insertions is studied. Rescheduling all remaining jobs after a dynamic event such as a new job insertion is impractical in an actual distributed blocking flowshop production process. A deep reinforcement learning (DRL) algorithm is therefore proposed to optimise the job selection model, making only local modifications to the original scheduling plan when new jobs arrive. The objective is to minimise the total completion time deviation of all jobs so that every job is finished on time, reducing storage costs. First, based on the definition of the dynamic DBFSP, a DRL framework built on the multi-agent deep deterministic policy gradient (MADDPG) is proposed. In this framework, a full schedule is generated by a variable neighbourhood descent algorithm before any dynamic event occurs. Meanwhile, all newly added jobs are reordered before the agents make decisions, so that the job that most urgently needs to be scheduled is selected first. The observations, actions and reward calculation methods are defined, and centralised training with decentralised execution is applied in MADDPG. Finally, a comprehensive computational experiment compares the proposed method with closely related, well-performing methods. The results indicate that the proposed method solves the dynamic DBFSP effectively and efficiently.
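
For concreteness, the objective can be formalised as follows. This is a plausible reading only, assuming each job $j$ has a due date $d_j$ and completion time $C_j$; the abstract does not spell out the exact definition of "total completion time deviation":

$$\min \; \sum_{j=1}^{n} \lvert C_j - d_j \rvert$$

Under this reading, finishing a job early incurs storage cost and finishing it late incurs tardiness, so minimising the absolute deviation pushes every job to complete as close to its due date as possible.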
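
The sketch below illustrates the generic centralised-training / decentralised-execution pattern that MADDPG relies on, not the paper's actual implementation. All names and dimensions (`N_AGENTS`, `OBS_DIM`, `ACT_DIM`) and the random observations are assumptions for illustration; the real observation features, action encoding and reward are defined in the body of the work.

```python
# Minimal sketch of MADDPG's centralised-training / decentralised-execution
# structure. Dimensions and names are hypothetical (e.g. one agent per factory);
# the paper's actual state features and reward are not reproduced here.
import torch
import torch.nn as nn

N_AGENTS, OBS_DIM, ACT_DIM = 3, 8, 4  # assumed sizes for illustration

class Actor(nn.Module):
    """Decentralised actor: maps one agent's local observation to its action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, ACT_DIM), nn.Tanh())

    def forward(self, obs):
        return self.net(obs)

class CentralCritic(nn.Module):
    """Centralised critic: scores the joint observations and actions of all agents."""
    def __init__(self):
        super().__init__()
        joint_dim = N_AGENTS * (OBS_DIM + ACT_DIM)
        self.net = nn.Sequential(nn.Linear(joint_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 1))

    def forward(self, all_obs, all_acts):
        return self.net(torch.cat([all_obs, all_acts], dim=-1))

actors = [Actor() for _ in range(N_AGENTS)]
critic = CentralCritic()

# Execution is decentralised: each actor acts on its own local observation only.
local_obs = [torch.randn(1, OBS_DIM) for _ in range(N_AGENTS)]
actions = [actor(obs) for actor, obs in zip(actors, local_obs)]

# Training is centralised: the critic sees every agent's observation and action.
q_value = critic(torch.cat(local_obs, dim=-1), torch.cat(actions, dim=-1))
print(q_value.shape)  # torch.Size([1, 1])
```

At execution time each actor needs only its own local observation, which is what makes the trained policies usable independently; the joint critic exists only during training.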
