Abstract

Cloud manufacturing provides a cloud platform that offers on-demand services to complete consumers’ tasks, but assigning tasks to enterprises offering different services requires many-to-many scheduling. The dynamic cloud environment places higher demands on a scheduling algorithm’s real-time responsiveness and generalizability. In addition, complex manufacturing tasks with flexible processing sequences further increase the difficulty of decision making. Existing approaches either struggle to meet the requirements of dynamics and fast response or fail to effectively capture the features of tasks with flexible processing sequences. To address these limitations, we develop a novel scheduling algorithm for a dynamic scheduling problem in the group-service cloud manufacturing environment. Our proposal is formulated and trained with multi-agent reinforcement learning. A graph convolution network encodes each task’s graph-structured features, and a recurrent neural network records each task’s processing trajectory. We independently design the action space and the reward function, and train the algorithm with a mixing network under the centralized training, decentralized execution architecture. Multi-agent reinforcement learning and graph convolution networks have rarely been applied to cloud manufacturing scheduling problems. Comparison experiments on a case study indicate that our proposal outperforms six other multi-agent reinforcement learning-based scheduling algorithms in scheduling performance and generalizability.
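
For readers unfamiliar with the components named above, the sketch below illustrates in PyTorch the kind of architecture the abstract describes: a graph convolution encoder for a task's subtask graph, a GRU over the task's processing trajectory, and a QMIX-style mixing network for centralized training with decentralized execution. The module names, layer sizes, and the choice of a QMIX-style monotonic mixer are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch of a GCN + GRU per-task agent with a QMIX-style mixing network.
# All dimensions and module choices here are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphConvLayer(nn.Module):
    """One graph-convolution step: aggregate neighbor features via a (row-normalized) adjacency matrix."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, node_feats, adj):
        # node_feats: (num_nodes, in_dim); adj: (num_nodes, num_nodes)
        return torch.relu(self.linear(adj @ node_feats))


class TaskAgent(nn.Module):
    """Per-task agent: GCN over the task's subtask graph plus a GRU over its processing trajectory."""
    def __init__(self, node_dim, hidden_dim, num_actions):
        super().__init__()
        self.gcn = GraphConvLayer(node_dim, hidden_dim)
        self.gru = nn.GRUCell(hidden_dim, hidden_dim)
        self.q_head = nn.Linear(hidden_dim, num_actions)

    def forward(self, node_feats, adj, h_prev):
        graph_emb = self.gcn(node_feats, adj).mean(dim=0)   # pool node embeddings into one task embedding
        h = self.gru(graph_emb.unsqueeze(0), h_prev)        # update the task's trajectory memory
        return self.q_head(h), h                            # per-action Q values and the new hidden state


class MixingNetwork(nn.Module):
    """QMIX-style mixer: combines per-agent Q values into a joint value conditioned on the global state."""
    def __init__(self, n_agents, state_dim, embed_dim=32):
        super().__init__()
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)
        self.hyper_b1 = nn.Linear(state_dim, embed_dim)
        self.hyper_w2 = nn.Linear(state_dim, embed_dim)
        self.hyper_b2 = nn.Linear(state_dim, 1)

    def forward(self, agent_qs, state):
        # agent_qs: (batch, n_agents); state: (batch, state_dim)
        b, n = agent_qs.shape
        w1 = torch.abs(self.hyper_w1(state)).view(b, n, -1)  # abs() keeps mixing monotonic in each agent's Q
        b1 = self.hyper_b1(state).unsqueeze(1)
        hidden = F.elu(torch.bmm(agent_qs.unsqueeze(1), w1) + b1)
        w2 = torch.abs(self.hyper_w2(state)).view(b, -1, 1)
        b2 = self.hyper_b2(state).view(b, 1, 1)
        return (torch.bmm(hidden, w2) + b2).view(b, 1)       # joint Q_tot used for centralized training
```

At execution time only the TaskAgent modules are needed (each task selects its action from its own Q values), while the mixing network is used during training to propagate a shared, global reward signal back to the individual agents.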
