Abstract

Human–robot collaboration (HRC) has been identified as a promising paradigm for human-centric smart manufacturing in the context of Industry 5.0. To enhance both human well-being and robotic flexibility in HRC, numerous research efforts have been devoted to human body perception, yet most of these studies address only specific facets of human recognition and lack a holistic view of the human operator. One approach to this challenge is the construction of a human digital twin (HDT), a centralized digital representation of heterogeneous human data that can be seamlessly integrated into the cyber-physical production system. By leveraging the HDT, the performance and efficiency of an HRC system can be further optimized. However, implementations of visual perception-based HDTs remain underreported, particularly in the HRC domain. To this end, this study proposes an exemplary vision-based HDT model for highly dynamic HRC applications. The model is built around a convolutional neural network that simultaneously models hierarchical human status, including 3D human posture, action intention, and ergonomic risk. On the basis of the constructed HDT, a robotic motion planning strategy is then introduced to adaptively optimize the robot's motion trajectory. Experiments and case studies in an HRC scenario demonstrate the effectiveness of the proposed approach.
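To make the multi-task design concrete, the sketch below shows one plausible realization: a shared convolutional backbone feeding three task-specific heads for 3D posture, action intention, and ergonomic risk. This is a minimal illustration only; the backbone choice (ResNet-18), the class/module names, and all output dimensions are assumptions, not the architecture actually used in the paper.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class HDTPerceptionNet(nn.Module):
    """Illustrative multi-task CNN: shared features, three HDT status heads.

    All names and dimensions here are hypothetical, chosen only to
    demonstrate the one-backbone / multi-head pattern the abstract describes.
    """
    def __init__(self, num_joints=17, num_actions=10, num_risk_levels=4):
        super().__init__()
        self.num_joints = num_joints
        backbone = models.resnet18(weights=None)
        # Drop the final classification layer; keep the global-pooled features.
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        self.pose_head = nn.Linear(512, num_joints * 3)    # regresses 3D joint coordinates
        self.intent_head = nn.Linear(512, num_actions)     # classifies the operator's intended action
        self.risk_head = nn.Linear(512, num_risk_levels)   # grades ergonomic risk (e.g. discrete levels)

    def forward(self, x):
        f = self.features(x).flatten(1)                    # (B, 512) shared representation
        pose = self.pose_head(f).view(-1, self.num_joints, 3)
        intent = self.intent_head(f)
        risk = self.risk_head(f)
        return pose, intent, risk

# One forward pass yields all three status estimates simultaneously,
# which is the property the abstract attributes to its network.
net = HDTPerceptionNet()
pose, intent, risk = net(torch.randn(1, 3, 224, 224))
print(pose.shape, intent.shape, risk.shape)  # (1, 17, 3) (1, 10) (1, 4)
```

A single shared backbone keeps the three estimates temporally consistent and cheap enough for the highly dynamic HRC setting, since the image features are computed once per frame rather than once per task.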