Automated, high-precision crack detection on building structures under poor lighting conditions poses a significant challenge for traditional image-based methods. Overcoming this challenge is crucial for practical structural health monitoring and rapid damage assessment, especially in post-disaster scenarios such as earthquakes. To address it, this paper presents a deep learning-based three-dimensional crack detection method that uses light detection and ranging (LiDAR) point cloud data. The method detects cracks without relying on color information, enabling high-precision and robust detection of visible surface damage. The key contribution is the NL-3DCrack model, which performs automated three-dimensional crack semantic segmentation and comprises a feature embedding module, an incomplete neighbor feature extraction module, a decoder, and a morphological filtering stage. Notably, we introduce an incomplete neighbor mechanism that effectively mitigates the impact of outliers. To validate the proposed method, we establish two three-dimensional crack detection datasets drawn from earthquake disaster scenes: the Luding dataset and the terrestrial laser scanner dataset. Experimental results show that the method achieves intersection-over-union scores of 39.62% and 51.33% on the respective test sets, surpassing existing point cloud-based semantic segmentation models, and ablation experiments further confirm the effectiveness of our approach. In summary, the method delivers strong crack detection performance on LiDAR data using only the XYZI channels (spatial coordinates and intensity); its precision and reliability make it well suited to real-world structural health monitoring and rapid damage assessment after disasters, particularly earthquakes.
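As a rough illustration of the kind of outlier-robust neighborhood processing the abstract alludes to, the sketch below shows one plausible reading of an "incomplete neighbor" aggregation on XYZI-only input: each point's k nearest neighbors are truncated by a distance cutoff so isolated outlier points do not contribute to the local feature. This is not the authors' NL-3DCrack implementation; the function name, cutoff rule, and feature choice are all assumptions made for illustration.

```python
# Minimal sketch (assumed, not the paper's method): per-point features from a
# truncated ("incomplete") k-NN neighborhood of an XYZI point cloud.
import numpy as np

def incomplete_neighbor_features(points_xyzi, k=16, cutoff_factor=2.0):
    """points_xyzi: (N, 4) array of x, y, z, intensity.
    Returns an (N, 8) feature per point: mean and std of the XYZI values
    of the retained neighbors."""
    xyz = points_xyzi[:, :3]
    n = xyz.shape[0]
    # Pairwise squared distances (brute force for this toy demo;
    # a KD-tree or grid would be used at realistic scan sizes).
    d2 = np.sum((xyz[:, None, :] - xyz[None, :, :]) ** 2, axis=-1)
    feats = np.zeros((n, 8), dtype=np.float64)
    for i in range(n):
        order = np.argsort(d2[i])
        knn = order[1:k + 1]                 # k nearest neighbors, excluding self
        dists = np.sqrt(d2[i, knn])
        # "Incomplete" neighborhood: discard neighbors farther than a multiple
        # of the median neighbor distance, so distant outliers are ignored.
        keep = knn[dists <= cutoff_factor * np.median(dists)]
        if keep.size == 0:
            keep = knn[:1]
        local = points_xyzi[keep]
        feats[i, :4] = local.mean(axis=0)
        feats[i, 4:] = local.std(axis=0)
    return feats

if __name__ == "__main__":
    pts = np.random.rand(200, 4)             # toy XYZI cloud
    pts[:5, :3] += 5.0                        # a few far-away outlier points
    print(incomplete_neighbor_features(pts).shape)   # (200, 8)
```

In a learned segmentation network, a comparable truncation step would sit inside the neighbor feature extraction module, with the aggregated features passed on to a decoder that predicts per-point crack labels.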