Abstract

Human-robot collaboration (HRC) plays a crucial role in agile, flexible, and human-centric manufacturing during the transition towards mass personalization. Nevertheless, in today's HRC tasks, either the human or the robot must follow the partner's commands and instructions as collaborative activities progress, rather than engaging proactively and mutually. Such non-semantic perception of HRC scenarios impedes the mutually required proactive planning and higher cognitive capabilities in existing HRC systems. To overcome this bottleneck, this research explores a dynamic scene graph-based method for mutual-cognition generation in Proactive HRC applications. First, a spatial-attention object detector is utilized to dynamically perceive objects in industrial settings. Second, a link prediction module is leveraged to construct HRC scene graphs. An attentional graph convolutional network (GCN) then captures relations among industrial parts, human operators, and robot operations, and reasons about the structural connections of human-robot collaborative processing as a graph embedding, which is linked to mutual planners that provide operation support to humans and proactive instructions to robots. Lastly, the Proactive HRC implementation is demonstrated on disassembly tasks of aging electric vehicle batteries (EVBs), and its mutual-cognition capabilities are evaluated.
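To make the attentional GCN step of the pipeline concrete, the sketch below shows one way such a layer could aggregate scene-graph node features (detected parts, the human operator, the robot) along predicted relation edges, with attention weights deciding which relations dominate the pooled graph embedding passed to a planner. This is a minimal illustrative sketch in PyTorch; the class name, feature dimensions, and toy graph are assumptions, not the paper's actual implementation.

```python
# Minimal sketch of an attentional graph convolution over an HRC scene graph.
# Nodes are scene entities (e.g., battery cell, screw, gripper, operator hand);
# edges are the relations produced by the link prediction module.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionalGCNLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)
        # attention score computed from concatenated source/target node features
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # x: [num_nodes, in_dim] node features; edge_index: [2, num_edges] (src, dst)
        h = self.proj(x)
        src, dst = edge_index
        # unnormalized attention logits, one per relation edge
        e = F.leaky_relu(self.attn(torch.cat([h[src], h[dst]], dim=-1))).squeeze(-1)
        # normalize over the incoming edges of each destination node
        alpha = torch.zeros_like(e)
        for node in dst.unique():
            mask = dst == node
            alpha[mask] = F.softmax(e[mask], dim=0)
        # attention-weighted aggregation of neighbor messages
        out = torch.zeros_like(h)
        out.index_add_(0, dst, alpha.unsqueeze(-1) * h[src])
        return F.relu(out)


# Toy usage: 4 scene-graph nodes with hypothetical relation edges
x = torch.randn(4, 16)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
layer = AttentionalGCNLayer(16, 32)
graph_embedding = layer(x, edge_index).mean(dim=0)  # pooled embedding for the mutual planner
```

The mean pooling at the end stands in for whatever readout the authors use; the key point is that the attention weights let task-relevant relations contribute more to the embedding consumed by the human-support and robot-instruction planners.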
