Abstract
Augmented reality (AR) has increasingly been applied to manual assembly tasks, converting paper-based instructions into intuitive visual guidance that alleviates cognitive load while improving assembly efficiency. Nevertheless, most current AR assembly systems focus on spatial alignment, superimposing a 3D virtual model directly onto the real scene and ignoring occlusion relationships between the virtual guidance and the real scene. To this end, we propose a bare-hand occlusion-aware interactive AR assembly method based on a monocular image: a lightweight deep neural network is established to infer the depth relationship between the 3D virtual model and the real scene, including the operator's gestures, so that ambiguous AR instructions caused by inaccurate occlusion reasoning are prevented, yielding more realistic bare-hand interactive AR guidance for manual assembly. A quantitative evaluation criterion is then established to measure gesture occlusion-awareness performance during manual assembly operations. Finally, comprehensive experiments show that the bare-hand occlusion-aware system alleviates operators' cognitive load in interactive AR assembly tasks, providing a more human-centered intelligent assembly application.
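To make the core idea concrete, the sketch below illustrates the kind of per-pixel occlusion test the abstract describes: once a monocular depth estimate of the real scene (including the hand) is available, each rendered pixel of the virtual model is shown only where the model lies nearer to the camera than the real surface. This is a minimal illustrative sketch, not the paper's implementation; the function name, array layouts, and the assumption that a depth map in metres is already available are all hypothetical.

```python
import numpy as np

def composite_occlusion_aware(frame_rgb: np.ndarray,
                              virtual_rgba: np.ndarray,
                              virtual_depth: np.ndarray,
                              scene_depth: np.ndarray) -> np.ndarray:
    """Blend a rendered virtual model into a camera frame, hiding the
    virtual pixels wherever the real scene (e.g. the operator's hand)
    is estimated to lie in front of the model.

    frame_rgb:     (H, W, 3) camera image, float in [0, 1]
    virtual_rgba:  (H, W, 4) rendered model with alpha channel
    virtual_depth: (H, W) depth buffer of the render (metres)
    scene_depth:   (H, W) monocular depth estimate of the frame (metres),
                   e.g. produced by a lightweight depth network (assumed)
    """
    alpha = virtual_rgba[..., 3:4]
    # A virtual pixel is visible only where the model is both rendered
    # (alpha > 0) and nearer to the camera than the real surface; elsewhere
    # the real pixel (such as the hand) correctly occludes the guidance.
    visible = (virtual_depth < scene_depth)[..., None].astype(frame_rgb.dtype)
    blend = alpha * visible
    return blend * virtual_rgba[..., :3] + (1.0 - blend) * frame_rgb
```

Without the `scene_depth` comparison, the blend would reduce to naive alpha compositing, i.e. the virtual model would always be drawn on top of the hand, which is exactly the ambiguous-occlusion failure the method aims to prevent.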