Abstract

With the continuous development of society, products are upgraded at an ever-increasing pace, and the recycling of end-of-life (EOL) products offers substantial benefits for environmental protection and resource utilization. In recent years, with the rapid development of artificial intelligence (AI), more and more scholars have begun to apply reinforcement learning (RL) and deep learning (DL) to practical problems. This paper focuses on the application of deep reinforcement learning (DRL) to the multi-robotic disassembly line balancing problem (MRDLBP). In the MRDLBP, the cycle time (CT) of each workstation is fixed, while the robot resources it can accommodate are selectable. The multi-objective formulation includes minimizing workstation idle time, prioritizing the disassembly of high-demand components, and minimizing energy consumption. A single product model is taken as input, and the robotic resources are variable. First, we formulate the mathematical model of the problem and propose a DRL-based framework for the MRDLBP. We then model the DRL system with three algorithms, namely deep Q-network (DQN), double DQN (D_DQN), and prioritized experience replay DQN (PRDQN), to solve the problem. Finally, we construct test cases of varying complexity by adjusting the type of input model and the number of robot resources, and evaluate the performance of the three algorithms under these conditions.
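As background for the algorithms named above, the following is a minimal sketch (not the authors' implementation) contrasting the bootstrapped targets used by DQN and double DQN. The network architecture, state encoding, and any MRDLBP-specific reward design are hypothetical placeholders for illustration only.

```python
# Minimal sketch of DQN vs. double DQN target computation (illustrative only;
# the state/action encoding for the MRDLBP is an assumption, not the paper's).
import torch
import torch.nn as nn


class QNet(nn.Module):
    """Maps a disassembly-line state vector to Q-values over candidate task assignments."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        return self.net(s)


def td_target(reward, next_state, done, online: QNet, target: QNet,
              gamma: float = 0.99, double: bool = False) -> torch.Tensor:
    """Bootstrapped target:
    - DQN takes the max over the target network's Q-values;
    - double DQN selects the action with the online network and evaluates it
      with the target network, reducing overestimation bias."""
    with torch.no_grad():
        q_next_target = target(next_state)                       # [batch, n_actions]
        if double:
            best_a = online(next_state).argmax(dim=1, keepdim=True)
            q_next = q_next_target.gather(1, best_a).squeeze(1)
        else:
            q_next = q_next_target.max(dim=1).values
    return reward + gamma * (1.0 - done) * q_next
```

PRDQN would reuse the same target computation but sample transitions from the replay buffer in proportion to their temporal-difference error rather than uniformly.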
