Abstract

Mass-customized production brings great uncertainty to computer-aided process planning (CAPP). Current CAPP methods based on heuristic optimization assume in advance that manufacturing resources are static and produce a deterministic plan that cannot cope with the uncertainty of the manufacturing environment. As a promising method for solving complex and dynamic decision-making problems, deep reinforcement learning is employed in this paper for process planning, aiming to improve response speed by exploiting the reusability and expandability of past decision-making experience. To simplify the decision procedure, two types of decisions, operation sequencing and resource selection, are fused into one by integrating environment states and agent behaviors in matrix form. Then, a masking algorithm is developed to screen out currently inexecutable machining operations at each decision step, and process planning datasets are generated for training and testing according to the actual processing logic. Next, the Monte Carlo method and a deep learning algorithm are used to evaluate and improve the process policy, respectively. Finally, the searching capability of the proposed method is tested in case studies with both static and dynamic manufacturing resources, and the results are discussed. The results show that the proposed approach solves the planning problem more efficiently than current optimization-based approaches.
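The masking step mentioned above can be illustrated with a minimal sketch. The code below is an assumed, simplified version (not the paper's implementation): operations whose precedence constraints are unsatisfied are masked out, and the policy is renormalized over the executable actions only.

```python
import numpy as np

def executable_mask(predecessors, done):
    """Hypothetical helper: predecessors[i] is the set of operation
    indices that must precede operation i; done is the set of
    operations already scheduled. Returns a boolean mask of
    currently executable operations."""
    return np.array([
        (i not in done) and predecessors[i] <= done
        for i in range(len(predecessors))
    ])

def masked_policy(logits, mask):
    """Set inexecutable actions to -inf before the softmax, so the
    agent can only sample executable machining operations."""
    masked = np.where(mask, logits, -np.inf)
    exp = np.exp(masked - masked.max())
    return exp / exp.sum()

# Toy precedence graph: operation 2 requires operations 0 and 1.
preds = [set(), set(), {0, 1}]
mask = executable_mask(preds, done={0})  # only op 1 is executable
probs = masked_policy(np.zeros(3), mask)
```

Here the renormalized policy concentrates all probability on the single executable operation; in general, masking before the softmax guarantees that invalid actions receive zero probability regardless of the network's raw logits.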
