Abstract
During human–machine collaboration in manufacturing, it is important to provide real-time annotations in the three-dimensional workspace for local workers who may lack relevant experience and knowledge. For example, in mixed reality (MR) assembly, workers must be alerted to avoid entering hazardous areas when manually replacing components. Many researchers have recently explored visual cues for conveying physical task progress in the MR interfaces of intelligent systems. However, the relationship between the embedding of visual cues and the balance of cognitive load in the interface has not been well characterized, especially for tasks that require annotating hazardous areas in complex operational environments. In this study, we developed a novel MR interface for an intelligent assembly system that supports local scene sharing based on dynamic 3D reconstruction, recognition of the remote expert's behavioral intention based on deep learning, and visual feedback on local workers' operations based on external bounding boxes. Through a case study, we compared the proposed encoding, 3D annotations with context (3DAC), against 3D sketch cues (3DS), 3DS combined with 3D spatial cues (3DSC), and 3DS combined with adaptive visual cues (AVC). We found that for physical tasks requiring annotations of specific areas, 3DAC better improves the quality of manual work and distributes the cognitive load of the MR interface more reasonably.
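To make the bounding-box feedback idea concrete, the sketch below shows one minimal way such a hazard alert could be computed; this is an illustrative assumption, not the authors' implementation, and all names (`hazard_min`, `hazard_max`, `hand_pos`, `WARN_DISTANCE`) are hypothetical.

```python
import numpy as np

def point_in_aabb(point: np.ndarray, box_min: np.ndarray, box_max: np.ndarray) -> bool:
    """Return True if a 3D point lies inside an axis-aligned bounding box."""
    return bool(np.all(point >= box_min) and np.all(point <= box_max))

def distance_to_aabb(point: np.ndarray, box_min: np.ndarray, box_max: np.ndarray) -> float:
    """Euclidean distance from a point to the box surface (0 if inside)."""
    clamped = np.clip(point, box_min, box_max)  # nearest point on or in the box
    return float(np.linalg.norm(point - clamped))

# Hypothetical example: warn the local worker when a tracked hand position
# approaches an annotated hazardous area in workspace coordinates (meters).
hazard_min = np.array([0.2, 0.0, 0.5])
hazard_max = np.array([0.6, 0.4, 0.9])
hand_pos = np.array([0.55, 0.35, 0.48])

WARN_DISTANCE = 0.05  # assumed 5 cm safety margin before the boundary

if point_in_aabb(hand_pos, hazard_min, hazard_max):
    print("ALERT: hand inside hazardous area")
elif distance_to_aabb(hand_pos, hazard_min, hazard_max) < WARN_DISTANCE:
    print("WARNING: hand approaching hazardous area")
```

In an MR interface, the warning branch would typically drive a visual cue (e.g., highlighting the external bounding box) rather than printing to a console.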