Abstract

The manufacturing industry is experiencing a revolution in the creation and utilization of data, and the abundance of industrial data creates a need for data-driven techniques that support real-time production scheduling. However, existing dynamic scheduling techniques have mainly been developed for problems of fixed size and cannot address the increasing volatility and complexity of practical production scheduling problems. To facilitate near real-time decision-making on the shop floor, we propose a deep multi-agent reinforcement learning-based approach to the dynamic job shop scheduling problem. A double deep Q-network (DDQN) algorithm, attached to decentralized scheduling agents, learns the relationships between production information and scheduling objectives and makes near real-time scheduling decisions. The proposed framework uses a centralized-training, decentralized-execution scheme together with parameter sharing to tackle the non-stationarity of the multi-agent reinforcement learning task. Several enhancements are also developed, including a novel state and action representation that handles size-agnostic dynamic scheduling problems, a chronological joint-action framework that alleviates the credit-assignment difficulty, and knowledge-based reward-shaping techniques that encourage cooperation. A simulation study shows that the proposed architecture significantly improves learning effectiveness and delivers superior performance compared with existing scheduling strategies and state-of-the-art deep reinforcement learning-based dynamic scheduling approaches.
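For readers unfamiliar with the double deep Q-network update mentioned above, the following minimal PyTorch sketch illustrates the decoupling of action selection and action evaluation at its core. All names (`q_net`, `target_net`, `gamma`, the batch tensors) are illustrative assumptions, not the authors' implementation; under the parameter-sharing scheme described in the abstract, every scheduling agent would query the same shared `q_net`.

```python
# Minimal double-DQN target sketch (assumed names, not the paper's code).
import torch

def ddqn_targets(q_net, target_net, rewards, next_states, dones, gamma=0.99):
    """Compute double-DQN bootstrap targets: the online network selects
    the next action, while the slowly updated target network evaluates it,
    which reduces the overestimation bias of vanilla DQN."""
    with torch.no_grad():
        # Action selection by the online (shared) network.
        next_actions = q_net(next_states).argmax(dim=1, keepdim=True)
        # Action evaluation by the target network.
        next_q = target_net(next_states).gather(1, next_actions).squeeze(1)
        # Terminal transitions contribute only the immediate reward.
        return rewards + gamma * (1.0 - dones) * next_q
```

In the decentralized execution phase, each agent would then act greedily, e.g. picking the feasible scheduling action maximizing `q_net(state)`; this usage note is likewise a hedged reading of the centralized-training, decentralized-execution scheme, not a statement of the paper's exact procedure.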
