Abstract

As a new manufacturing paradigm, cloud manufacturing integrates distributed manufacturing resources and capabilities into services and provides these services to consumers with manufacturing requirements. Assigning consumers’ tasks to services is a many-to-many scheduling problem. An effective scheduling algorithm can reduce production expenditure and increase processing efficiency. However, the cloud manufacturing environment is dynamic, complex, and diverse, making task scheduling intractable. Deep reinforcement learning has gradually been applied to various scheduling problems and shows the potential for little hand-crafted design and non-trivial generalizability. This paper proposes a deep recurrent Q-network (DRQN)-based scheduling algorithm to address task scheduling problems in cloud manufacturing. The environment is modeled as a partially observable Markov decision process. A long short-term memory (LSTM)-based policy trained by reinforcement learning is used to choose a service provider for each task at each step. A case study of automobile structural part processing indicates that our proposal outperforms deep Q-network by 5.7% and proximal policy optimization by 7.7% in scheduling accuracy.
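The per-step provider selection described above — a recurrent state summarizing the task history, feeding Q-values over candidate service providers — can be sketched as follows. This is a minimal toy sketch: the plain tanh recurrent cell (standing in for the LSTM), all dimensions and weights, and the toy task observations are illustrative assumptions, not the paper's actual architecture or training procedure.

```python
import numpy as np

class RecurrentQNet:
    """Toy recurrent Q-network: one decision (provider index) per task step.

    A simple tanh recurrent cell is used here in place of an LSTM; the
    shapes and random weights are illustrative assumptions only.
    """

    def __init__(self, obs_dim, hidden_dim, n_providers, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(scale=0.1, size=(hidden_dim, obs_dim))
        self.W_h = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
        self.W_out = rng.normal(scale=0.1, size=(n_providers, hidden_dim))
        self.h = np.zeros(hidden_dim)

    def reset(self):
        # Clear the recurrent state at the start of a scheduling episode.
        self.h = np.zeros_like(self.h)

    def q_values(self, obs):
        # Update the recurrent state from the partial observation, then
        # map it to one Q-value per candidate service provider.
        self.h = np.tanh(self.W_in @ obs + self.W_h @ self.h)
        return self.W_out @ self.h

    def act(self, obs, eps=0.0, rng=None):
        # Epsilon-greedy choice of a provider index over the Q-values.
        q = self.q_values(obs)
        if rng is not None and rng.random() < eps:
            return int(rng.integers(len(q)))
        return int(np.argmax(q))

# Schedule a short sequence of tasks, choosing one provider per step.
net = RecurrentQNet(obs_dim=4, hidden_dim=8, n_providers=3)
net.reset()
tasks = [np.ones(4), np.zeros(4), -np.ones(4)]
choices = [net.act(obs) for obs in tasks]
print(choices)  # one provider index in [0, 3) per task
```

Because the hidden state carries over between steps, the choice for a later task can depend on the tasks scheduled before it, which is what distinguishes this recurrent formulation from a feed-forward DQN under partial observability.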
