Abstract

Industry 5.0 emphasises collaboration between humans, advanced technologies, and AI-enabled robots, centring on human-centric principles while integrating flexibility and sustainability to enhance workflows. Advances in human-robot interaction services can significantly improve the operational efficiency of digital twin manufacturing cells. Motivated by this background, a gesture-driven interaction architecture for digital twin manufacturing cells is proposed, comprising a data acquisition layer, a data processing layer, and an application service layer. Deep learning algorithms are then employed to recognise predefined gestures: an improved YOLOv5 algorithm addresses the low accuracy of static gesture recognition, while a 3D-CNN-based multimodal data fusion algorithm handles the continuity, diversity, and high dimensionality of dynamic gestures. Finally, a prototype system is developed using Kinect 2.0 and Unity 3D, linking gesture recognition to the digital twin model and the digital twin model to the physical manufacturing cell. This study is expected to provide theoretical and practical insights for empowering human-robot interaction technology in manufacturing cells.
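
To make the dynamic-gesture component more concrete, the following is a minimal sketch, not the authors' implementation, of a 3D-CNN classifier operating on short gesture clips. It assumes fused RGB-D input of shape (batch, channels, frames, height, width), such as clips captured with Kinect 2.0; the channel count, class count, and layer sizes are illustrative placeholders.

```python
# Hedged sketch of a 3D-CNN for dynamic gesture classification (PyTorch).
# Input: fused RGB-D clips shaped (batch, channels, frames, H, W).
# All hyperparameters below are assumptions for illustration only.
import torch
import torch.nn as nn


class Gesture3DCNN(nn.Module):
    def __init__(self, in_channels: int = 4, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 32, kernel_size=3, padding=1),
            nn.BatchNorm3d(32),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),  # pool spatially, keep temporal length
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm3d(64),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=2),          # pool spatially and temporally
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),              # collapse remaining spatio-temporal dims
            nn.Flatten(),
            nn.Linear(64, num_classes),
        )

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(clip))


# Example: a batch of two 16-frame 112x112 RGB-D clips (4 channels) -> class logits.
logits = Gesture3DCNN()(torch.randn(2, 4, 16, 112, 112))
print(logits.shape)  # torch.Size([2, 10])
```

In practice, the temporal convolutions let the network capture the continuity of a gesture across frames, while concatenating RGB and depth as input channels is one simple way to fuse the multimodal Kinect streams before classification.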
