Indoor scene reconstruction from LiDAR point clouds is crucial for indoor navigation, scan-to-BIM, and virtual reality, but occlusions caused by furniture and clutter make the task challenging. This paper introduces a novel framework that reconstructs 3D models from the typically unoccluded ceiling point cloud, since the ceiling is rarely obstructed by indoor objects and is therefore relatively well preserved. The framework first extracts the ceiling with a proposed cloth-simulation-based method that can handle ceilings of various shapes. A room instance map of the scene is then obtained by feeding a point cloud density map to Mask R-CNN, and by overlaying the ceiling point cloud on the room instance map, the ceiling is segmented into the points belonging to each room. Next, the contour of each room's ceiling is extracted, and a new algorithm detects the connections between rooms to restore their topological relationships, yielding the scene's floor plan. Finally, the heights of the contours and connections are detected, and the floor plan is extruded to those heights to generate a 3D model. To evaluate the proposed method's effectiveness, experiments were conducted on two indoor LiDAR point cloud datasets, GibLayout and the ISPRS Benchmark, comprising 13 scenes in total. The results were compared with representative methods, including Floor-SP and HEAT for floor plan extraction, and PolyFit and KSR for 3D model reconstruction. The experimental results show that the proposed method outperforms existing approaches and reconstructs 3D models that closely match the real scenes.
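The density map mentioned above is the standard input representation for 2D instance segmentation of point clouds: the points are projected onto the XY plane and counted per grid cell. The abstract does not specify the grid resolution or implementation, so the following is a minimal sketch with an assumed cell size (the `cell` parameter is hypothetical, not from the paper):

```python
import numpy as np

def density_map(points, cell=0.05):
    """Project 3D points onto the XY plane and count points per grid cell.

    points : (N, 3) array of XYZ coordinates.
    cell   : assumed grid resolution in metres (not specified in the paper).
    Returns a 2D array whose values are point counts per cell; such a map
    can be normalized to an image and passed to a Mask R-CNN model.
    """
    xy = points[:, :2]
    mins = xy.min(axis=0)
    spans = xy.max(axis=0) - mins
    # Number of cells along each axis, at least one per axis.
    nx = max(1, int(np.ceil(spans[0] / cell)))
    ny = max(1, int(np.ceil(spans[1] / cell)))
    # histogram2d bins the projected points into the grid.
    hist, _, _ = np.histogram2d(
        xy[:, 0], xy[:, 1], bins=[nx, ny],
        range=[[mins[0], mins[0] + nx * cell],
               [mins[1], mins[1] + ny * cell]])
    return hist

# Toy usage: 1000 random points in a 1 m x 1 m footprint.
pts = np.random.rand(1000, 3)
dm = density_map(pts, cell=0.1)
print(dm.shape)        # grid dimensions
print(int(dm.sum()))   # every point falls in some cell -> 1000
```

Overlaying the per-room masks predicted from such a map onto the ceiling points (by cell index) is what assigns each ceiling point to a room instance.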