Abstract
Traditional methods of detecting concrete damage, such as manual inspection, are typically slow, labor-intensive, and subjective. Integrating deep learning algorithms has automated this process, representing a significant advance in building damage inspection. Semantic segmentation, a deep learning technique, is increasingly recognized for its ability to accurately identify the location and shape of concrete damage. Unlike basic deep learning approaches such as classification and object detection, semantic segmentation not only recognizes damage but also delineates its boundaries, giving it great potential for dimension measurement. However, multi-level, high-precision detection for assessing various types of structural damage remains an area requiring further research. This study created a database of reinforced concrete surfaces with pixel-level, multi-category semantic segmentation annotations covering various levels of component damage, including cracking and spalling of concrete as well as exposure, buckling, and fracture of reinforcing bars. Advanced deep learning segmentation algorithms, including U-Net, DeepLabv3, K-Net, and FastSCNN, were then trained and evaluated on detecting these types of concrete damage. All models achieve good accuracy of more than 98% but lower F1 scores of around 70%, a gap that likely reflects the dominance of undamaged background pixels. U-Net and K-Net demonstrate relatively stable performance, indicating a degree of consistency and a high performance peak. In contrast, DeepLabv3's F1 score fluctuates significantly, suggesting that the model may suffer from overfitting or other stability issues during training, resulting in relatively low average performance. FastSCNN exhibits the highest potential, achieving the highest F1 score. In addition, this study tested each model's performance on new damage images and explored the impact of the training/validation split ratio, providing insights into each algorithm's suitability for various inspection scenarios. Finally, perspectives on current challenges and future directions in the field are given.