Abstract

Augmented Reality (AR) has been used to support mechanical assembly. However, most reported AR assembly supporting systems (ARASS) pay little attention to the interaction between the operator's hands and virtual objects, especially the occlusion between hands and virtual objects. Correct occlusion makes an ARASS more immersive and thereby improves the operator's interaction experience. To address this issue, this paper presents a bare-hand depth perception method designed to improve the interaction experience in an ARASS. The method is built on an interaction scene designed for AR mechanical assembly supporting and consists of two components: hand segmentation and hand surface mesh generation. The hand segmentation method segments the operator's hand areas from the depth image of the scene and divides each hand area into several sub-areas that provide hand information. The hand surface mesh generation method then generates hand surface meshes in 3D space from the segmentation results; these meshes are used to solve the occlusion problem of the hand area in the AR scene. The results verify that the bare-hand depth perception method handles the occlusion between the operator's hands and virtual objects correctly in real time and recognizes hand information within a limited space. The method increases the operator's depth perception of the bare hand and makes an ARASS more immersive.
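The occlusion handling the abstract describes amounts to a per-pixel depth test: a virtual part is drawn over the camera image only where it lies in front of the operator's hand. The following is a minimal sketch of that idea, not the paper's implementation; the function name, the array layouts, and the `no_hand` sentinel convention are assumptions for illustration.

```python
import numpy as np

def composite_with_hand_occlusion(camera_rgb, hand_depth,
                                  virtual_rgb, virtual_depth,
                                  no_hand=0.0):
    """Per-pixel depth test between the operator's hand and rendered virtual parts.

    camera_rgb    : (H, W, 3) real camera frame
    hand_depth    : (H, W) depth of segmented hand pixels in metres;
                    `no_hand` where no hand was detected
    virtual_rgb   : (H, W, 3) rendered virtual parts
    virtual_depth : (H, W) depth buffer of the virtual render (inf where empty)
    """
    hand_present = hand_depth != no_hand
    virtual_present = np.isfinite(virtual_depth)
    # A virtual pixel is shown only where a virtual part exists and is either
    # unobstructed by a hand or strictly nearer to the camera than the hand.
    virtual_wins = virtual_present & (~hand_present | (virtual_depth < hand_depth))
    out = camera_rgb.copy()
    out[virtual_wins] = virtual_rgb[virtual_wins]
    return out
```

Where the hand is nearer than the virtual part, the camera pixel (the real hand) is kept, which produces the correct "hand in front of virtual object" occlusion the paper targets.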

Highlights

  • Mechanical assembly is the process of joining mechanical parts or components into a machine according to the technical requirements of the design

  • In this paper, we present a bare-hand depth perception method for use in AR assembly supporting (ARAS)

  • It is implemented on an interaction scene designed for ARAS


Summary

INTRODUCTION

Mechanical assembly is the process of joining mechanical parts or components into a machine according to the technical requirements of the design. Correct occlusion makes the ARASS display more immersive. To address this issue, this research presents a bare-hand depth perception method designed for ARAS; it outputs correct occlusion between the operator's bare hands and virtual parts. Bleser et al. [14] developed a series of methods for learning assembly workflows from expert operators and teaching novice users with the learnt workflow models; they used a series of wearable sensors and cameras to monitor expert operators' workflows and the movements of parts. In the proposed interaction scene, the operator interacts with a top view of the scene, which brings the observation mode close to that of mechanical assembly and provides an interaction space larger than a desktop. The scene imposes several constraints: the operator must enter the scene from the top of the camera frame, only one operator may enter the scene, and no real objects other than the operator may be present in the scene.
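Given the constraints above (a fixed top-down depth camera and an otherwise empty scene), hand pixels can be found by comparing each depth frame against a background capture of the empty scene: any valid pixel clearly nearer than the background must belong to the operator. This is a minimal background-subtraction sketch under those assumptions, not the paper's segmentation algorithm; the function name and the `min_depth`/`margin` parameters are illustrative.

```python
import numpy as np

def segment_hand_pixels(depth, background_depth, min_depth=0.2, margin=0.05):
    """Mark pixels nearer than the empty-scene background as hand candidates.

    depth            : (H, W) current depth frame in metres (0 = invalid reading)
    background_depth : (H, W) depth of the empty scene, captured once
    min_depth        : discard readings nearer than the sensor's valid range
    margin           : a pixel must be at least this much nearer than the
                       background to count, suppressing sensor noise
    """
    valid = depth > min_depth
    nearer = depth < (background_depth - margin)
    return valid & nearer
```

The resulting boolean mask would then be split into connected sub-areas and lifted into 3D, using the camera intrinsics, to build the hand surface mesh used for occlusion.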

BODY AREA SEGMENTATION
HAND AREA SEGMENTATION
OCCLUSION BETWEEN HANDS AND VIRTUAL OBJECTS
GENERATION OF HAND SURFACE MESH
CONCLUSION AND FUTURE WORK
