Abstract

To develop vision systems for autonomous robotic disassembly, this paper presents a dual-loop implementation architecture that enables a robot vision system to learn from human vision in disassembly tasks. The architecture leverages human visual knowledge through a collaborative scheme named ‘learning-by-doing’. In the dual-loop implementation architecture, a human-robot collaborative disassembly loop comprising autonomous perception, human-robot interaction and autonomous execution processes is established to address perceptual challenges in disassembly tasks by introducing human operators wearing augmented reality (AR) glasses, while a deep active learning loop is designed to use human visual knowledge to develop robot vision through autonomous perception, human-robot interaction and model learning processes. Considering uncertainties in the conditions of end-of-life products, an objective ‘informativeness’ matrix integrating label information and regional information is designed for autonomous perception, and AR technology is utilised to improve the operational accuracy and efficiency of the human-robot interaction process. The two loops share the autonomous perception and human-robot interaction processes and are therefore executed simultaneously. To validate the capability of the proposed architecture, a screw removal task was studied. The experiments demonstrated that the architecture can accomplish challenging perceptual tasks and develop the perceptual ability of robots accurately, stably, and efficiently in disassembly processes. The results highlight the potential of ‘learning-by-doing’ in developing robot vision towards autonomous robotic disassembly through collaborative human-machine vision systems.
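As a hedged illustration of how an informativeness score combining label information and regional information might be computed for active sample selection (the abstract does not give the exact formulation; the function names, the entropy-based label term and the area-based regional term below are assumptions made for the sketch), consider the following minimal Python example:

```python
# Illustrative sketch only: assumes a detector that returns, per image, class
# probability vectors and bounding boxes. Label uncertainty (entropy) is combined
# with a simple regional term (normalised box size) to rank images for human labelling.
import numpy as np

def label_uncertainty(class_probs: np.ndarray) -> np.ndarray:
    """Shannon entropy of each detection's class distribution (higher = less certain)."""
    p = np.clip(class_probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def regional_weight(boxes: np.ndarray, image_area: float) -> np.ndarray:
    """Hypothetical regional term: favours small, hard-to-see regions such as screws."""
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return 1.0 - np.clip(areas / image_area, 0.0, 1.0)

def informativeness(class_probs: np.ndarray, boxes: np.ndarray, image_area: float) -> float:
    """Image-level score: mean per-detection entropy weighted by the regional term."""
    if len(class_probs) == 0:
        return 0.0
    return float(np.mean(label_uncertainty(class_probs) * regional_weight(boxes, image_area)))

# Example: rank two candidate images; the higher-scoring one would be routed to the
# AR-equipped operator for labelling, and the result fed back into model training.
img_a = informativeness(np.array([[0.5, 0.5], [0.9, 0.1]]),
                        np.array([[0, 0, 20, 20], [10, 10, 60, 60]], dtype=float), 640 * 480)
img_b = informativeness(np.array([[0.99, 0.01]]),
                        np.array([[0, 0, 200, 200]], dtype=float), 640 * 480)
print(sorted([("img_a", img_a), ("img_b", img_b)], key=lambda t: -t[1]))
```

In such a scheme, images with the highest scores are the ones forwarded to the human operator, so annotation effort is concentrated where the robot's perception is weakest.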
