Abstract

Human–robot collaborative assembly is a next-generation manufacturing paradigm in which the complementary strengths of humans and robots can be fully leveraged. To enable robots to collaborate effectively with humans, much as humans collaborate with one another, robot learning from human demonstrations has been adopted to learn assembly tasks. However, existing feature-based approaches require a careful feature design and extraction process and are usually difficult to extend with task contexts, while existing learning-based approaches typically demand a large amount of manual data-labeling effort and also rarely consider task contexts. This article proposes a dual-input deep learning approach that incorporates task contexts into the robot learning from human demonstration process to assist humans in assembly. In addition, online automated data labeling during human demonstration is proposed to reduce the training effort. Experimental validation on a realistic human–robot model car assembly task with safety-aware execution designs demonstrates the effectiveness and advantages of the proposed approaches.
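
The abstract names a dual-input architecture but does not specify it. Purely as an illustration, the sketch below shows one plausible way such a network could be structured in PyTorch: one branch encodes a workspace observation, the other encodes a task-context vector, and the fused features predict the robot's next supporting action. The class name DualInputAssemblyNet, the branch shapes, and the fixed-length context encoding are all assumptions for this sketch, not the authors' published model.

```python
import torch
import torch.nn as nn

class DualInputAssemblyNet(nn.Module):
    """Hypothetical dual-input network: an image branch for the workspace
    observation and an MLP branch for a task-context vector (e.g., the
    current assembly step); fused features predict an assembly action."""

    def __init__(self, num_context_features: int = 8, num_actions: int = 10):
        super().__init__()
        # Branch 1: a small CNN over the workspace image.
        self.image_branch = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
            nn.Linear(32 * 4 * 4, 128), nn.ReLU(),
        )
        # Branch 2: an MLP over the task-context vector.
        self.context_branch = nn.Sequential(
            nn.Linear(num_context_features, 32), nn.ReLU(),
        )
        # Fusion head: concatenate both embeddings and classify the action.
        self.head = nn.Sequential(
            nn.Linear(128 + 32, 64), nn.ReLU(),
            nn.Linear(64, num_actions),
        )

    def forward(self, image: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        fused = torch.cat(
            [self.image_branch(image), self.context_branch(context)], dim=1
        )
        return self.head(fused)

# Example forward pass on a dummy batch of two observations.
model = DualInputAssemblyNet()
logits = model(torch.randn(2, 3, 96, 96), torch.randn(2, 8))
print(logits.shape)  # torch.Size([2, 10])
```

The key design point the abstract implies is that task context enters as its own input branch rather than being hand-engineered into features, so the same backbone can serve different assembly steps; the online automated labeling described in the abstract would supply the action labels for training such a head during demonstration.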
