Within a Human-centric Human-Robot Collaboration (HHRC) system, monitoring, assessing, and optimizing an operator’s well-being are essential to creating an efficient and comfortable working environment. Current monitoring systems assess individual human factors in isolation. However, the rise of the Human Digital Twin (HDT) has provided a framework for synchronizing multiple operator well-being assessments into a comprehensive understanding of the operator’s performance and health. Within manufacturing, an operator’s dynamic well-being can be attributed to their physical and cognitive fatigue across the assembly process. We therefore apply non-invasive video understanding techniques to extract relevant assembly process information for automatic physical fatigue assessment. Our main novelty is a video-based fatigue estimation method in which a boundary-aware dual-stream MS-TCN combined with an LSTM detects the operation type, the number of operation repetitions, and the target arm performing each task in an assembly process video. These detections are then fed into our physical fatigue profile to automatically assess the operator’s localized physical fatigue. We record and evaluate on the assembly of a real-world bookshelf, where our method outperforms other recent action segmentation models in operation segmentation and target arm detection. In addition, we integrate a cognitive fatigue assessment tool that captures the operator’s physiological signals in real time to detect stress-induced body responses. Together, these assessments provide a more robust HDT of the operator for an HHRC system.
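As a rough illustration of the kind of pipeline the abstract describes, the sketch below shows a dual-stream temporal convolutional backbone fused into an LSTM with frame-wise heads for operation type and target arm. It is not the authors' implementation: the feature dimensions, layer counts, class counts, fusion strategy, and the names DilatedTCNStream and FatigueSegmenter are hypothetical choices for illustration, and the boundary-aware component and repetition counting are omitted.

# Minimal sketch of a dual-stream temporal segmentation backbone with an LSTM head,
# assuming pre-extracted per-frame RGB and optical-flow features.
import torch
import torch.nn as nn


class DilatedTCNStream(nn.Module):
    """Single-stage stack of residual dilated temporal convolutions (MS-TCN-style)."""

    def __init__(self, in_dim, hidden_dim=64, num_layers=6):
        super().__init__()
        self.in_proj = nn.Conv1d(in_dim, hidden_dim, kernel_size=1)
        self.layers = nn.ModuleList([
            nn.Conv1d(hidden_dim, hidden_dim, kernel_size=3,
                      padding=2 ** i, dilation=2 ** i)
            for i in range(num_layers)
        ])

    def forward(self, x):                      # x: (batch, in_dim, time)
        out = self.in_proj(x)
        for layer in self.layers:
            out = out + torch.relu(layer(out))  # residual dilated conv keeps length
        return out                              # (batch, hidden_dim, time)


class FatigueSegmenter(nn.Module):
    """Dual-stream TCN whose fused features feed an LSTM with two frame-wise heads:
    operation type and target arm (e.g. left / right / both)."""

    def __init__(self, rgb_dim, flow_dim, num_ops, num_arms=3, hidden_dim=64):
        super().__init__()
        self.rgb_stream = DilatedTCNStream(rgb_dim, hidden_dim)
        self.flow_stream = DilatedTCNStream(flow_dim, hidden_dim)
        self.lstm = nn.LSTM(2 * hidden_dim, hidden_dim, batch_first=True)
        self.op_head = nn.Linear(hidden_dim, num_ops)
        self.arm_head = nn.Linear(hidden_dim, num_arms)

    def forward(self, rgb_feats, flow_feats):   # both: (batch, time, feat_dim)
        rgb = self.rgb_stream(rgb_feats.transpose(1, 2))
        flow = self.flow_stream(flow_feats.transpose(1, 2))
        fused = torch.cat([rgb, flow], dim=1).transpose(1, 2)  # (batch, time, 2*hidden)
        seq, _ = self.lstm(fused)
        return self.op_head(seq), self.arm_head(seq)            # per-frame logits


# Example usage with random tensors standing in for real video features.
model = FatigueSegmenter(rgb_dim=2048, flow_dim=1024, num_ops=8)
op_logits, arm_logits = model(torch.randn(1, 300, 2048), torch.randn(1, 300, 1024))
print(op_logits.shape, arm_logits.shape)  # torch.Size([1, 300, 8]) torch.Size([1, 300, 3])

In the setting the abstract describes, the per-frame predictions would then be aggregated into operation segments and repetition counts before being passed to the physical fatigue profile.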