Abstract

Distributed manufacturing can reduce production costs through cooperation among factories and has become an important trend in industry. For enterprises whose production tasks arrive daily, random job arrivals are routine. This paper therefore studies the Distributed Job-shop Scheduling Problem (DJSP) with random job arrivals, a typical case from the equipment manufacturing industry. The DJSP involves two coupled decision-making processes, job assignment and job sequencing, and the distributed and uncertain production environment requires the scheduling method to be responsive and adaptive. Accordingly, a Deep Reinforcement Learning (DRL) based multi-agent method is proposed, composed of an assigning agent and a sequencing agent. Two Markov Decision Processes (MDPs) are formulated, one for each agent. In the MDP for the assigning agent, fourteen factory-and-job related features are extracted as the state features, seven composite assigning rules are designed as the candidate actions, and the reward depends on the total processing times of the factories. In the MDP for the sequencing agent, five machine-and-job related features serve as the state features, six sequencing rules form the action space, and the reward is the change in the factory makespan. In addition, to enhance the agents' learning ability, a Deep Q-Network (DQN) framework with a variable threshold probability in the training stage is designed, which balances exploitation and exploration during model training. The effectiveness of the proposed multi-agent method is demonstrated by an independent utility test and a comparison test based on 1350 production instances, and its practical value in actual production is illustrated by a case study from an automotive engine manufacturing company.
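
To make the "variable threshold probability" idea concrete, the sketch below shows one common way to realize it: an epsilon-greedy action selection whose exploration threshold decays over training episodes, so early episodes favor exploration and later episodes favor exploitation. The decay schedule and the parameter names (`eps_start`, `eps_end`, `decay_rate`) are assumptions for illustration, not the paper's exact design.

```python
import random

def threshold_probability(episode, eps_start=1.0, eps_end=0.05, decay_rate=0.995):
    """Exploration threshold for a given episode (assumed exponential decay)."""
    return max(eps_end, eps_start * (decay_rate ** episode))

def select_action(q_values, episode):
    """Explore with probability eps(episode); otherwise exploit argmax Q."""
    eps = threshold_probability(episode)
    if random.random() < eps:
        # Explore: pick a random dispatching rule from the action space.
        return random.randrange(len(q_values))
    # Exploit: pick the rule with the highest estimated Q-value.
    return max(range(len(q_values)), key=q_values.__getitem__)

# Example: the sequencing agent chooses among six sequencing rules.
q_values = [0.2, 0.8, 0.1, 0.5, 0.3, 0.4]
print(select_action(q_values, episode=0))    # early training: mostly random
print(select_action(q_values, episode=800))  # late training: mostly greedy (index 1)
```

Under such a schedule, the same mechanism can serve both the assigning agent (seven candidate actions) and the sequencing agent (six candidate actions), with only the size of the Q-value vector differing.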
