Abstract

Herein, we present a real-time multi-agent deep reinforcement learning model as a disassembly planning framework for human–robot collaboration. The disassembly plan optimises sequences to minimise the operation time and disassembly costs of end-of-life (EoL) products. By combining different data-driven decision-making tools, the plan aims to handle the complexities and uncertainties of disassembly tasks. Based on the physical features and geometric limitations of EoL product components, we calculate product disassembly difficulty scores. The deep reinforcement learning model then integrates these scores into the planning process. The model allocates tasks in real time according to the online conditions of the human operator, cobot, and product, enabling it to cope with uncertainties that may change the process routine. We also present scenarios in which a cobot collaborates with human operators of different skill levels. To evaluate model performance, we compare it with baseline models in terms of convergence time and the disassembly features incorporated. The analysis indicates that our model converges three times faster than a baseline model applied to the same case study. Moreover, our model includes more features of the disassembly problem in its decision-making process than any other baseline model.
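The abstract describes two ingredients of the framework: a difficulty score computed from the physical features and geometric limitations of each component, and a real-time allocation of tasks between the human operator and the cobot based on those scores and the agents' current states. The sketch below illustrates that idea only; the feature names, weights, and threshold rule are hypothetical assumptions, not taken from the paper.

```python
# Illustrative sketch only: the feature names, weights, and the threshold
# allocation rule below are assumptions, not the paper's actual method.

def difficulty_score(component, weights=None):
    """Weighted sum of normalised feature values (each assumed in [0, 1]),
    standing in for the paper's disassembly difficulty score."""
    weights = weights or {"weight": 0.3, "fasteners": 0.4, "accessibility": 0.3}
    return sum(weights[k] * component[k] for k in weights)


def allocate_task(component, human_busy, cobot_busy, threshold=0.5):
    """Toy real-time allocation rule: difficult components go to the human
    operator when available, easy ones to the cobot; otherwise fall back
    to whichever agent is idle."""
    score = difficulty_score(component)
    if score >= threshold:
        return "human" if not human_busy else "cobot"
    return "cobot" if not cobot_busy else "human"
```

In the actual framework such a score would feed into the deep reinforcement learning agents' state or reward rather than a fixed threshold rule; the sketch only shows how component features can be condensed into a single score that drives allocation.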
