Abstract

We present a novel real-time framework for non-rigid 3D reconstruction from a single depth camera that is robust to noise, camera pose variation, and large deformations. KinectFusion achieved high-quality real-time 3D object reconstruction from a single depth camera by implicitly representing an object's surface with a signed distance field (SDF). Many incremental reconstruction methods have since been proposed, progressively improving surface estimation, but previous works primarily focused on improving conventional SDF matching and deformation schemes. In contrast, the proposed framework tackles the temporal inconsistency caused by SDF approximation and fusion, so that SDFs can be manipulated and the target reconstructed more accurately over time. In our reconstruction pipeline, we introduce a refinement evolution method in which an erroneous SDF obtained from a depth sensor is recovered within a few iterations by propagating SDF values outward from the surface. The reliable gradients of the refined SDFs enable more accurate non-rigid tracking of a target object. Furthermore, we propose a level-set evolution for SDF fusion, allowing SDFs to be manipulated stably in the reconstruction pipeline over time. The proposed methods are fully parallelizable and run in real-time. Qualitative and quantitative evaluations show that incorporating the refinement and fusion methods into the reconstruction pipeline improves 3D reconstruction accuracy and temporal reliability by avoiding cumulative errors. Our pipeline yields more accurate reconstructions that are robust to noise and large motions, outperforming previous state-of-the-art reconstruction methods.
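For context, the conventional SDF fusion that this line of work builds on (and that the proposed level-set evolution aims to improve) is the KinectFusion-style per-voxel weighted running average of truncated SDF observations. A minimal sketch of that baseline follows; the function and parameter names are illustrative, not from the paper.

```python
import numpy as np

def fuse_tsdf(tsdf, weight, obs_tsdf, obs_weight, max_weight=64.0):
    """KinectFusion-style fusion: per-voxel weighted running average.

    `tsdf`/`weight` hold the accumulated truncated SDF and its confidence;
    `obs_tsdf`/`obs_weight` come from the newest depth frame. All arrays
    share the voxel-grid shape. (Names here are hypothetical.)
    """
    w_sum = weight + obs_weight
    # Weighted average of the old estimate and the new observation.
    fused = (weight * tsdf + obs_weight * obs_tsdf) / np.maximum(w_sum, 1e-8)
    # Cap the weight so old frames do not dominate forever.
    return fused, np.minimum(w_sum, max_weight)

# Two noisy observations of a voxel whose true SDF value is 0.5:
d, w = np.array([0.4]), np.array([1.0])
d, w = fuse_tsdf(d, w, np.array([0.6]), np.array([1.0]))
# The running average pulls the estimate toward 0.5.
```

Because each voxel is averaged independently, noisy or inconsistent observations can degrade the field's gradients near the surface; the abstract's refinement and level-set evolution steps are motivated by exactly this failure mode.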
