Abstract

This paper proposes a new deep learning-based mobile AR approach for intelligent task assistance that performs 3D spatial mapping without marker-based pre-registration, automatically and accurately matching virtual AR objects to their corresponding physical objects using single-snapshot RGB-D data. First, the proposed approach applies a deep learning-based instance segmentation method to the snapshot RGB-D data to detect real object instances and to segment their surrounding regions in the 3D point cloud. Then, an iterative closest point (ICP) algorithm performs 3D spatial mapping between the segmented point cloud of the real object and its corresponding virtual model, so that the virtual information is seamlessly and automatically synchronized with its corresponding real object. To demonstrate the effectiveness of the proposed method, we conducted quantitative and qualitative comparative experiments evaluating accuracy, basic task performance, and usability. The experimental results verify that the proposed deep learning-based 3D spatial mapping approach is more accurate and better suited to mobile AR-based visualization and interaction than previous approaches. We have also implemented several applications in actual working situations, which verifies the applicability and extensibility of the proposed approach.
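The ICP step described above can be illustrated with a minimal sketch. The following is not the paper's implementation; it is a hypothetical point-to-point ICP in NumPy that aligns a segmented source point cloud onto a target (virtual-model) cloud using brute-force nearest-neighbour correspondences and the SVD-based (Kabsch) best-fit rigid transform. Real pipelines would use a KD-tree for correspondence search and an outlier-robust variant.

```python
import numpy as np

def icp_align(source, target, iterations=20):
    """Align `source` (N x 3) onto `target` (M x 3) with point-to-point ICP.

    Hypothetical sketch: brute-force nearest neighbours, no outlier
    rejection. Returns the accumulated rotation R, translation t, and
    the transformed source cloud.
    """
    src = source.copy()
    R_total = np.eye(3)
    t_total = np.zeros(3)
    for _ in range(iterations):
        # 1. Correspondences: for each source point, its nearest target point.
        dists = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matched = target[np.argmin(dists, axis=1)]
        # 2. Best-fit rigid transform between matched pairs (Kabsch via SVD).
        src_mean, tgt_mean = src.mean(axis=0), matched.mean(axis=0)
        H = (src - src_mean).T @ (matched - tgt_mean)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = tgt_mean - R @ src_mean
        # 3. Apply the increment and accumulate the total transform.
        src = src @ R.T + t
        R_total = R @ R_total
        t_total = R @ t_total + t
    return R_total, t_total, src
```

In the paper's pipeline, `source` would be the instance-segmented point cloud of the real object and `target` a cloud sampled from its virtual CAD model; the resulting (R, t) places the virtual AR content on the physical object.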
