Abstract

The application of Augmented Reality (AR) can effectively reduce the cognitive load of operators by presenting complex assembly work instructions as virtual augmented content. If the virtual augmented content is superimposed on the physical product without handling occlusion, distortion may appear in the AR guidance video. This paper proposes a monocular image-based real-time occlusion handling method for the virtual part model and the physical product. In the AR-assisted assembly process, the virtual part model can be occluded by the assembly scene or by assembly objects. First, monocular SLAM is used to reconstruct the assembly scene, and the reconstructed sparse 3D points are converted into depth points in a depth map. We use the color information of the assembly scene to control the propagation of the sparse depth points to the remaining pixels; with GPU-accelerated computation, a densified depth map of the assembly scene is obtained. Then, the 3D models of the assembly objects are rendered as a depth map according to their registration coordinates in the AR-assisted assembly system. The depth maps of the assembly scene and the assembly objects are synthesized into a final depth map for AR occlusion handling. Finally, the depth relationships among the real assembly scene, the assembly objects, and the virtual augmented content are compared to determine which pixels to display for the AR effect. The results show that the proposed occlusion handling method achieves high accuracy and fast speed in comparison with conventional methods, demonstrating that our method can effectively solve the problem of occlusion handling for AR-assisted assembly systems.
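The final step of the pipeline reduces to a per-pixel depth test between the synthesized real-world depth map and the rendered virtual content. The sketch below illustrates this idea only; it is not the authors' implementation, and the array names (scene_depth, object_depth, virt_depth, virt_rgb, camera_frame) are hypothetical placeholders for the densified scene depth, the rendered object depth, and the virtual part render described in the abstract.

```python
# Minimal sketch of depth-map synthesis and per-pixel occlusion compositing,
# assuming all inputs are NumPy arrays of the same resolution:
#   scene_depth  - densified depth map of the assembly scene (H x W, meters)
#   object_depth - depth map rendered from the registered assembly-object models
#   virt_depth   - depth map rendered from the virtual part model
#   virt_rgb     - color render of the virtual part model (H x W x 3)
#   camera_frame - live camera image (H x W x 3)
import numpy as np

def composite_ar_frame(camera_frame, virt_rgb, virt_depth,
                       scene_depth, object_depth, no_depth=np.inf):
    # Synthesize the final real-world depth map: the nearer of the
    # assembly scene and the assembly objects at each pixel.
    real_depth = np.minimum(scene_depth, object_depth)

    # The virtual part is shown only where it has valid depth and lies
    # closer to the camera than any real surface; elsewhere it is occluded.
    visible = (virt_depth < real_depth) & (virt_depth != no_depth)

    # Per-pixel selection between the virtual render and the camera image.
    out = np.where(visible[..., None], virt_rgb, camera_frame)
    return out.astype(camera_frame.dtype)
```

Because every pixel is decided independently, this comparison parallelizes naturally, which is consistent with the GPU-accelerated computation mentioned in the abstract.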
