Abstract

We present an interactive system for reconstructing a 3D indoor environment and manipulating it using a moving 3D sensor. Our system takes the depth stream from a depth sensor and converts it into point clouds. The pose of the 3D sensor is then tracked in real time by matching the current point cloud against those from all previous frames; the matched cloud is then merged into a single world point cloud to build the 3D model of the environment. Tracking the 3D sensor in real time helps automatically fill the holes in the model where previous frames have not provided coverage, making the model complete. Once the 3D model of the environment is ready, our system allows us either to add more 3D objects to it or to remove existing ones. Here, we propose and evaluate two different 3D object segmentation methods, which form the core module of our object removal function: a K-means based algorithm for simple models, and a graph-based algorithm for complex models. The system is tested and evaluated on various indoor environments.

Keywords: virtual reality, 3D reconstruction, KinectFusion, Large Scale, Segmentation, Graph-cut, sensor, K-means
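As a rough illustration of the pipeline's first step, the sketch below back-projects a depth frame into a point cloud using the standard pinhole camera model. The intrinsics (fx, fy, cx, cy) and the NumPy-based implementation are assumptions for illustration, not the paper's code; any calibrated depth sensor such as a Kinect provides these parameters.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image of shape (H, W), in meters,
    into an (N, 3) point cloud via the pinhole model.

    Illustrative sketch only; fx, fy, cx, cy are assumed camera
    intrinsics, not values from the paper.
    """
    h, w = depth.shape
    # Pixel coordinate grids: u indexes columns, v indexes rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    # Invert the pinhole projection: u = fx * x / z + cx, etc.
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    # Drop invalid (zero-depth) pixels, which depth sensors commonly emit.
    return points[points[:, 2] > 0]
```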
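Likewise, a minimal sketch of the K-means based segmentation idea, clustering points by Euclidean position in 3D. The paper does not specify its feature space, initialization, or stopping criterion, so everything below (including the parameters k, iters, and seed) is an illustrative assumption.

```python
import numpy as np

def kmeans_segment(points, k=5, iters=20, seed=0):
    """Cluster an (N, 3) point cloud into k segments with plain K-means.

    Illustrative sketch of K-means segmentation on raw 3D positions;
    the paper's actual method may differ.
    """
    rng = np.random.default_rng(seed)
    # Initialize centroids by sampling k distinct points from the cloud.
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid (Euclidean distance).
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned points.
        for j in range(k):
            members = points[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return labels, centroids

# Usage on a stand-in cloud (a reconstructed model would be used in practice):
cloud = np.random.rand(1000, 3)
labels, centers = kmeans_segment(cloud, k=4)
```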
