Abstract

With the widespread application of location-based services, the appropriate representation of indoor spaces and efficient indoor 3D reconstruction have become essential tasks. Due to the complexity and enclosed nature of indoor spaces, it is difficult to develop a versatile solution for large-scale indoor 3D scene reconstruction. In this paper, an annotated hierarchical Structure-from-Motion (SfM) method is proposed for low-cost and efficient indoor 3D reconstruction using unordered images collected with widely available smartphones or consumer-level cameras. Although the reconstruction of indoor models is often compromised by indoor complexity, we exploit the availability of complex semantic objects to classify the scenes and construct a hierarchical scene tree to recover the indoor space. Starting with the semantic annotation of the images, images that share the same object are detected and classified using visual words and the support vector machine (SVM) algorithm. The SfM method is then applied to hierarchically recover the atomic 3D point cloud model of each object, with the semantic information from the images attached. Finally, an improved random sample consensus (RANSAC) generalized Procrustes analysis (RGPA) method is employed to register and optimize the partial models into a complete indoor scene. The proposed approach incorporates image classification into the hierarchical SfM-based indoor reconstruction task, which enables semantic propagation from images to points. It also reduces the computational complexity of traditional SfM by avoiding exhaustive pair-wise image matching. The applicability and accuracy of the proposed method were verified on two image datasets collected with smartphone and consumer cameras. The results demonstrate that the proposed method is able to efficiently and robustly produce semantically and geometrically correct indoor 3D point models.
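As an illustration of the image-classification stage described above, the following Python sketch builds a bag-of-visual-words representation and trains an SVM to group images by the object they depict. The ORB features, k-means vocabulary, vocabulary size, and function names are assumptions chosen for illustration; the paper's exact features and parameters may differ.

```python
# Illustrative sketch of the visual-words + SVM image-classification step.
# ORB features, k-means clustering, and all parameters are assumptions,
# not the paper's exact configuration.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def extract_descriptors(image_paths):
    """Detect local features and return one descriptor array per image."""
    orb = cv2.ORB_create(nfeatures=1000)
    per_image = []
    for path in image_paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = orb.detectAndCompute(gray, None)
        per_image.append(desc if desc is not None else np.empty((0, 32)))
    return per_image

def build_vocabulary(per_image_descriptors, n_words=200):
    """Cluster all descriptors into a vocabulary of visual words."""
    stacked = np.vstack(per_image_descriptors).astype(np.float32)
    return KMeans(n_clusters=n_words, n_init=10).fit(stacked)

def bow_histogram(descriptors, vocabulary):
    """Quantize an image's descriptors into a normalized word histogram."""
    if len(descriptors) == 0:
        return np.zeros(vocabulary.n_clusters)
    words = vocabulary.predict(descriptors.astype(np.float32))
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

def train_scene_classifier(image_paths, labels, n_words=200):
    """Train an SVM that maps word histograms to object/scene labels."""
    per_image = extract_descriptors(image_paths)
    vocab = build_vocabulary(per_image, n_words)
    X = np.array([bow_histogram(d, vocab) for d in per_image])
    clf = SVC(kernel="rbf", C=10.0).fit(X, labels)
    return vocab, clf
```

At inference time, each new image would be converted to a word histogram with `bow_histogram` and assigned to a scene cluster by the trained SVM, so that images sharing the same object are reconstructed together.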

Highlights

  • Indoor 3D models deliver precise geometry and rich scene knowledge about indoor spaces, which have great potential in object tracking and interaction, scene understanding, virtual environment rendering, indoor localization and route planning, etc. [1,2,3]

  • Most of the current model acquisition technologies are based on light detection and ranging (LiDAR) surveys [5,6], Kinect depth cameras [7,8], or image-based approaches such as robot simultaneous localization and mapping (SLAM) [9]

  • Outdoor reconstruction systems can usually output a city-scale model efficiently from a single survey, for example, from long-range photographs taken by unmanned aerial vehicles (UAVs) or street images captured by moving survey vehicles

Summary

Introduction

Indoor 3D models deliver precise geometry and rich scene knowledge about indoor spaces, which have great potential in object tracking and interaction, scene understanding, virtual environment rendering, indoor localization and route planning, etc. [1,2,3]. Incremental SfM algorithms [14] start with an image pair and expand to the whole scene by sequentially adding related cameras and scene points. These incremental methods are limited in computational efficiency, as they involve exhaustive pair-wise image matching and repeated bundle adjustment.

Based on the above observations, a novel semantically guided hierarchical SfM indoor reconstruction approach is proposed in this paper, which integrates image clustering, object segmentation, and 3D point model reconstruction into the same pipeline. The proposed method inherits the computational efficiency and robustness of hierarchical SfM, with further improvements that incorporate image semantic information into the data partitioning and model reconstruction. The proposed method efficiently and robustly recovers a complete indoor point model with coarse-level objects and annotations from image collections. The main contributions of the proposed method are as follows:

(1) We present a low-cost and efficient indoor 3D reconstruction method using unordered images collected with widely available smartphones or consumer-level cameras, which alleviates the dependence on professional instruments and operation.

(2) Unlike traditional SfM methods, we integrate image clustering, coarse-level object segmentation, and 3D point model reconstruction into the same pipeline.

(3) We perform the SfM in an annotated hierarchical manner, whereby the cluttered images are independently classified and reconstructed along a hierarchical scene tree, improving the computational efficiency while balancing the distribution of error.

(4) We present a strategy to search for matching points while running the RGPA to align point clouds during atomic point cloud registration, which improves the efficiency and robustness of the registration process; a sketch of such a RANSAC-guided alignment is given after this list.
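The following is a minimal, hypothetical sketch of a RANSAC-guided Procrustes-style alignment in the spirit of the RGPA registration step: putative 3D-3D correspondences between two partial point clouds are repeatedly sampled, a similarity transform is fit to each minimal sample, and the transform with the largest inlier set is refit and kept. The thresholds, sample size, and helper names are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of RANSAC-guided Procrustes (similarity) alignment between
# two sets of putatively corresponding 3D points from partial reconstructions.
import numpy as np

def fit_similarity(src, dst):
    """Least-squares similarity transform (scale, rotation, translation), Umeyama-style."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    s_c, d_c = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(d_c.T @ s_c / len(src))   # cross-covariance of dst vs. src
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:                      # enforce a proper rotation
        D[2, 2] = -1.0
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / s_c.var(axis=0).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t

def ransac_procrustes(src, dst, iters=500, thresh=0.05, seed=0):
    """Robustly align corresponding 3D points src -> dst, rejecting outlier matches."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = np.array([], dtype=int), None
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)        # minimal sample
        s, R, t = fit_similarity(src[idx], dst[idx])
        resid = np.linalg.norm((s * (R @ src.T)).T + t - dst, axis=1)
        inliers = np.where(resid < thresh)[0]
        if len(inliers) >= 3 and len(inliers) > len(best_inliers):
            best_inliers = inliers
            best_model = fit_similarity(src[inliers], dst[inliers])  # refit on all inliers
    return best_model, best_inliers
```

In such a scheme, the surviving inlier correspondences and the refit similarity transform would bring one atomic point cloud into the coordinate frame of another before a final joint optimization over all partial models.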

Methodology
Object Oriented Partial Scene Reconstruction
Point Cloud Registration and Optimization
Experiments
Findings
Discussion
Conclusions