Abstract

This paper presents a fast and accurate method for real-time 3D reconstruction using a depth camera and an inertial sensor. Generally, the camera localization is obtained from the depth data with the ICP (Iterative Closest Point) algorithm. When the depth camera moves fast, ICP may converge to a poor local minimum, which results in tracking failure. To prevent this, a camera pose transformation matrix derived from the inertial sensor is taken as the initial value of ICP. The two poses obtained from the inertial sensor and from ICP are then fused by the Invariant EKF method to obtain a more accurate pose. Because a massive number of points is input to ICP, large-scale parallel processors are used to accelerate the Invariant EKF operation. Furthermore, the inertial sensor helps the system decide whether full raycasting should be performed when the 3D model is rendered. If raycasting is skipped, an adaptive forward projection can be performed for the visualization. Experiments show the effectiveness of the proposed method.
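The following is a minimal sketch, not the authors' implementation, of the core idea described above: seeding ICP with the pose predicted by the inertial sensor so that fast camera motion does not drive ICP into a poor local minimum. It uses the Open3D registration API purely for illustration; the function name track_frame, the pose matrix T_imu_prediction, and the correspondence-distance value are assumptions introduced here for the example.

```python
import numpy as np
import open3d as o3d

def track_frame(prev_points, curr_points, T_imu_prediction):
    """Estimate the camera motion between two depth frames.

    prev_points, curr_points: (N, 3) arrays of back-projected depth points.
    T_imu_prediction: 4x4 pose increment integrated from the inertial sensor,
                      used as the initial value of ICP instead of the identity.
    """
    source = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(curr_points))
    target = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(prev_points))
    target.estimate_normals()  # point-to-plane ICP needs target normals

    result = o3d.pipelines.registration.registration_icp(
        source, target,
        max_correspondence_distance=0.05,  # metres; assumed value
        init=T_imu_prediction,             # IMU pose as the ICP seed
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane(),
    )
    # result.transformation would subsequently be fused with the IMU pose
    # (e.g. by an Invariant EKF, as in the paper) to obtain the final estimate.
    return result.transformation
```

In this sketch, replacing T_imu_prediction with the identity matrix reproduces the conventional ICP initialization that the paper argues fails under fast motion.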
