Abstract
With the rapid development of the economy and technology, the rate of product replacement has accelerated, resulting in a large number of discarded products. Disassembly is an important way to recycle waste products, and it also helps reduce manufacturing costs and environmental pollution. The combination of a single-row linear disassembly line and a U-shaped disassembly line offers distinctive advantages in various application scenarios. This paper addresses the Hybrid Disassembly Line Balancing Problem (HDLBP) that considers the requirement of multi-skilled workers. A mathematical model is established to maximize the recovery profit according to the characteristics of the proposed problem. To facilitate the search for an optimal solution, a new strategy is developed that allows reinforcement learning agents to interact with complex and changing environments in real time, and deep reinforcement learning is used to assign multiple products and their disassembly tasks. On this basis, we propose a Soft Actor–Critic (SAC) algorithm to address this problem effectively. Compared with the Deep Deterministic Policy Gradient (DDPG), Advantage Actor–Critic (A2C), and Proximal Policy Optimization (PPO) algorithms, the results show that SAC obtains near-optimal results on small-scale cases. SAC also outperforms DDPG, PPO, and A2C on large-scale disassembly cases.
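As background to the abstract's central method, the sketch below illustrates the entropy-regularized (soft) value and policy objective that distinguish SAC from DDPG, A2C, and PPO, using a toy discrete action space in which the agent picks the next disassembly task to assign. The Q-values, logits, and temperature are made-up illustrative numbers, not values from the paper, and the paper's actual state, action, and reward formulation for the HDLBP is not reproduced here.

```python
import numpy as np

# Hypothetical toy setting: the agent chooses which of 4 disassembly tasks
# to assign next; the Q-values and policy logits below are invented numbers.
q_values = np.array([5.0, 3.2, 4.1, 0.5])   # critic's Q(s, a) estimates
logits   = np.array([1.0, 0.2, 0.7, -1.0])  # actor's unnormalized preferences
alpha    = 0.2                               # entropy temperature

# Softmax policy pi(a | s)
pi = np.exp(logits - logits.max())
pi /= pi.sum()

# Entropy-regularized (soft) state value used in SAC's critic targets:
#   V(s) = E_{a ~ pi}[ Q(s, a) - alpha * log pi(a | s) ]
soft_value = np.sum(pi * (q_values - alpha * np.log(pi)))

# Policy objective (to be minimized) for a discrete-action SAC variant:
#   J_pi = E_{a ~ pi}[ alpha * log pi(a | s) - Q(s, a) ]
policy_loss = np.sum(pi * (alpha * np.log(pi) - q_values))

print(f"soft value V(s) = {soft_value:.3f}")
print(f"policy loss     = {policy_loss:.3f}")
```

The entropy bonus (weighted by alpha) encourages the policy to keep exploring alternative task assignments rather than committing prematurely, which is the property the abstract credits for SAC's stronger performance on large-scale disassembly cases.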