Abstract
Large volumes of 3D parametric datasets, such as building information modeling (BIM), are the foundation for developing and applying smart city and digital twin technologies. These datasets are also valuable tools for efficiently managing as-built structures during the operation and maintenance stages. Nevertheless, current approaches to the scan-to-BIM process rely on manual or semi-automatic procedures and insufficiently leverage the semantic data in point clouds. These methods struggle to accurately represent large-scale, complex indoor layouts and to extract details from irregularly shaped unstructured elements, causing inefficiencies in BIM model generation. To address these issues, we propose an innovative scan-to-BIM framework based on deep learning algorithms and raw point cloud data, enabling the automatic generation of 3D models for both structured and unstructured indoor elements. First, we propose an enhanced deep learning neural network to improve the semantic segmentation accuracy of point clouds. Second, we develop an efficient workflow for reconstructing 3D building models of structured indoor scenes. The proposed workflow can handle large-scale data comprising multiple room layouts with Manhattan or non-Manhattan structures, and it reconstructs 3D models automatically through a BIM parametric algorithm implemented in Revit software. Third, we introduce a robust method for unstructured elements that automatically generates the corresponding 3D BIM models, even when the associated semantic information is incomplete. The proposed approach was evaluated on synthetic and real data spanning indoor scenes of different scales and complexities. The experimental results demonstrate that the improved model significantly enhances overall semantic segmentation accuracy compared to the baseline models.
The proposed scan-to-BIM framework is efficient for indoor element 3D reconstruction, achieving precision, recall, and F-score values ranging from 96% to 99%. The generated BIM models are competitive with traditional methods regarding model completeness and geometric accuracy.
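For context, a minimal sketch (not the authors' code) of how precision, recall, and F-score are typically computed when evaluating reconstructed BIM elements against ground truth. The matching criterion is an assumption here; element correspondence is often established by geometric overlap (e.g. an IoU threshold):

```python
# Hypothetical evaluation sketch: scoring reconstructed indoor elements
# against a ground-truth model. An element counts as a true positive (tp)
# when it is matched to a ground-truth element; unmatched reconstructed
# elements are false positives (fp); missed ground-truth elements are
# false negatives (fn).

def reconstruction_scores(tp: int, fp: int, fn: int) -> dict:
    """Return precision, recall, and F-score for matched elements."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_score = (2 * precision * recall / (precision + recall)
               if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f_score": f_score}

# Illustrative counts only: 97 matched walls, 2 spurious, 1 missed.
scores = reconstruction_scores(tp=97, fp=2, fn=1)
```

With counts in this range, all three metrics fall in the 96-99% band reported above.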