Abstract

In real-life manufacturing systems, production management is often affected by urgent demands and unexpected interruptions, such as new job insertions, machine breakdowns and operator unavailability. In this context, agent-based techniques are well suited to responding quickly to dynamic disturbances. The ability of agents to perceive their environment and make decisions can be further enhanced by deep reinforcement learning (DRL). This paper investigates a novel dynamic re-entrant hybrid flow shop scheduling problem (DRHFSP) considering worker fatigue and skill levels to minimize the total tardiness of all production tasks. An integrated architecture of DRL and a multi-agent system (DRL-MAS) is proposed for real-time scheduling in dynamic environments. Two DRL models are developed for different sub-decisions: a reward-shaping technique combining long-term and short-term returns is designed for the job-sequencing and machine-selection sub-decisions, and an attention-based network is designed for the worker-assignment sub-decision to enable efficient feature extraction and decision making. Numerical experiments and case studies demonstrate the superior performance of the proposed DRL models compared with existing scheduling strategies.
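The reward-shaping idea mentioned above can be illustrated with a minimal sketch: blend an immediate (short-term) reward with a delayed (long-term) return signal. The function name, the blending coefficient, and the tardiness-based interpretation of the two terms below are illustrative assumptions, not the paper's exact formulation.

```python
def shaped_reward(short_term_reward: float,
                  long_term_return: float,
                  blend: float = 0.5) -> float:
    """Convex combination of an immediate reward and a delayed return.

    short_term_reward: e.g. the negative change in estimated tardiness
        right after a job-sequencing / machine-selection decision.
    long_term_return: e.g. the negative total tardiness observed at the
        end of the scheduling episode, discounted back to this step.
    blend: hypothetical weighting coefficient in [0, 1].
    """
    return blend * short_term_reward + (1.0 - blend) * long_term_return


if __name__ == "__main__":
    # Toy usage: the agent reduces local tardiness by 3 time units now,
    # while the discounted end-of-episode signal is -10.
    print(shaped_reward(short_term_reward=3.0,
                        long_term_return=-10.0,
                        blend=0.6))
```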
