Abstract

In demand-driven manufacturing, a critical open problem is how to dynamically and optimally schedule, i.e., allocate, jobs with different customer requirements in large-scale manufacturing systems so as to meet various objectives. Scheduling experts have proposed several methods with acceptable performance based on their understanding of such systems' characteristics. However, the problem remains challenging due to the complicated composition of the system's multiple objectives, the complex system dynamics and constraints, and the extremely high computational cost for large-scale manufacturing systems. In this paper, we adopt a stochastic processing network that captures the stochasticity and dynamics of discrete manufacturing systems. We then propose a data-driven, distributed reinforcement learning (DRL) method that requires little information about the system dynamics and reduces the cost of learning and searching for a scheduling policy with high production performance, making the method capable of scaling to large processing systems. In particular, we first use a stochastic processing network, i.e., a queueing model, to represent the production processes of a typical discrete manufacturing system so that it can be simulated. We then decompose the reinforcement learning into local processes: each local agent makes decisions locally by assigning indices to jobs based on each job's real-time information (an index policy). Because of this distributed learning characteristic and the index policy, our approach is much more scalable and efficient than either centralized methods or traditional decentralized reinforcement learning methods.
In our simulations, the approach achieves higher production performance than other heuristics, prior decentralized reinforcement learning methods, and centralized methods on stochastic processing networks of various scales.
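To make the notion of an index policy concrete, the sketch below shows a dispatcher for a two-class queueing system. The paper's agents *learn* their index functions from real-time job data; here, purely as a stand-in, we use the classic c-mu rule (holding cost times service rate), and the class names, costs, and rates are hypothetical, not taken from the paper.

```python
# Minimal sketch of an index policy in a two-class processing system.
# Assumption: indices come from the classic c-mu rule; the paper's method
# would instead learn each local agent's index function.

COST = {0: 3.0, 1: 1.0}   # hypothetical per-class holding costs
MU = {0: 0.5, 1: 0.9}     # hypothetical per-class service rates

def index(job_class):
    # Each local agent scores its own queue independently -- no global
    # state is needed, which is what makes the scheme distributed.
    return COST[job_class] * MU[job_class]

def dispatch(queues):
    """Serve the head job of the nonempty queue with the highest index."""
    candidates = [c for c, q in queues.items() if q]
    if not candidates:
        return None  # all queues empty
    best = max(candidates, key=index)
    return queues[best].pop(0)

queues = {0: ["a1", "a2"], 1: ["b1"]}
print(dispatch(queues))  # class 0 has index 1.5 > 0.9, so "a1" is served
```

Because each queue's index depends only on that queue's own jobs, the per-decision cost grows linearly in the number of queues rather than with the joint state space, which is the scalability argument made in the abstract.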
