Abstract
This paper deals with the 3D reconstruction problem for dynamic non-rigid objects with a single RGB-D sensor. This is a challenging task because previous sequential fusion methods suffer from almost inevitable error accumulation and because surface tracking may fail over a long sequence. Therefore, we propose a global non-rigid registration framework and tackle the drifting problem via an explicit loop closure. Our scheme starts with a fusion step that produces multiple partial scans from the input sequence, followed by a pairwise non-rigid registration and loop detection step that obtains correspondences between neighboring partial pieces and between pieces that form a loop. Then, we perform a global registration procedure to align all the pieces into a consistent canonical space, guided by the established matches. Finally, our proposed model-update step helps to fix potential misalignments that remain after the global registration. Both geometric and appearance constraints are enforced during alignment, so the recovered model has accurate geometry as well as a high-fidelity color map for the mesh. Experiments on both synthetic and various real datasets demonstrate the capability of our approach to reconstruct complete and watertight deformable objects.
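As a rough illustration of the correspondence-guided alignment idea underlying the registration steps above, the sketch below solves for a rigid transform from matched point pairs with a Kabsch/Procrustes solve. This is a simplified stand-in, not the authors' non-rigid solver; the helper name `procrustes_align` and the NumPy-based setup are our own illustrative assumptions.

```python
import numpy as np

def procrustes_align(src, dst):
    """Least-squares rigid alignment (R, t) mapping src points onto dst points.

    Rigid Kabsch/Procrustes solve over given correspondences; a simplified
    stand-in for correspondence-guided non-rigid registration.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # 3x3 cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Toy usage: recover a known rotation and translation from matched points.
rng = np.random.default_rng(0)
src = rng.normal(size=(100, 3))
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.1, -0.2, 0.3])
dst = src @ R_true.T + t_true
R, t = procrustes_align(src, dst)
assert np.allclose(R, R_true, atol=1e-6) and np.allclose(t, t_true, atol=1e-6)
```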
Highlights
There have been methods that handle deformable objects by using color images to track the motion and reconstruct the 3D shape [4,5,6]
We have evaluated the proposed approach on both a synthetic dataset and several real datasets of deformable objects captured with an RGB-D sensor
We have proposed a framework to reconstruct the 3D shape and appearance of deformable objects in dynamic scenes
Summary
There have been methods that handle deformable objects by using color images to track the motion and reconstruct the 3D shape [4,5,6]. Some of these works propose deforming a pre-scanned model under the constraints of multiple images. Those methods suffer from the ambiguities of appearance matching and from the color variation caused by illumination effects and view changes. The most recent state-of-the-art method using multiple cameras was proposed in [11], which exploits temporal information to generate temporally consistent models. Systems with multiple depth sensors have demonstrated impressive results on dynamic object modeling, but they are not portable and often require very precise calibration between the sensors. This makes 3D modeling with a single depth sensor more attractive.