Abstract

Cracks are the external manifestation of potential safety risks in bridge structures, and the automatic detection and segmentation of bridge cracks remains a top priority for civil engineers. With the development of image segmentation techniques based on convolutional neural networks, new opportunities have emerged in bridge crack detection. Traditional bridge crack detection methods are vulnerable to complex backgrounds and small cracks, which makes effective segmentation difficult. This study presents a bridge crack segmentation method based on a densely connected U-Net (BC-DUnet) with a background elimination module and a cross-attention mechanism. First, a densely connected feature extraction module (DCFEM) integrating the advantages of DenseNet is proposed, which effectively enhances the main feature information of small cracks. Second, a background elimination module (BEM) is proposed, which filters out redundant information by assigning different weights so as to retain the main feature information of the crack. Third, a cross-attention mechanism (CAM) is proposed to better capture long-range dependencies and further improve the pixel-level representation of the model. In comparative experiments with traditional networks such as FCN and U-Net, BC-DUnet achieved a pixel accuracy of 98.18%, and its IoU was 14.12% and 4.04% higher than that of FCN and U-Net, respectively. Compared with non-traditional networks such as HU-ResNet and FUN-4s, BC-DUnet shows higher accuracy and better generalization, and is not prone to overfitting. The proposed BC-DUnet network can eliminate the influence of complex backgrounds on the segmentation accuracy of bridge cracks, improve detection efficiency, reduce detection cost, and has practical application value.
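The abstract does not spell out the internals of the cross-attention mechanism (CAM). As a rough, illustrative sketch of the general idea behind cross-attention, where queries from one feature map attend over keys/values from another to capture long-range dependencies, a minimal NumPy version (not the paper's implementation; array shapes and the scaled dot-product form are assumptions) could look like:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feat, kv_feat):
    """Scaled dot-product cross-attention.

    q_feat:  (Nq, d)  flattened pixels of the query feature map
    kv_feat: (Nkv, d) flattened pixels of the key/value feature map
    Returns (Nq, d): each query pixel as a weighted mix of kv pixels.
    """
    d_k = q_feat.shape[-1]
    scores = q_feat @ kv_feat.T / np.sqrt(d_k)   # (Nq, Nkv) affinities
    weights = softmax(scores, axis=-1)           # each row sums to 1
    return weights @ kv_feat                     # attended features

# Toy usage with 4 query pixels and 6 key/value pixels, 8 channels each.
q = np.ones((4, 8))
kv = np.ones((6, 8))
out = cross_attention(q, kv)                     # shape (4, 8)
```

With identical inputs the attention weights are uniform, so the output simply averages the key/value pixels; in a real network the query and key/value maps come from different branches, so the weights emphasize distant but related crack pixels.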

Highlights

  • In bridge crack detection, cracks are affected by complex road conditions, including the size of pavement texture particles, marking edges, and other interference, which makes segmentation and effective recognition difficult

  • Pixel accuracy (PA) is the ratio of correctly labeled pixels to total pixels; mean pixel accuracy (MPA) is the mean over all classes of the ratio of correctly classified pixels in each class; and mean intersection-over-union (MIoU) is the mean ratio of the intersection to the union of the two sets of real and predicted values

  • The true positive value (TP) refers to the number of pixels correctly recognized as cracks, and the false positive value (FP) indicates the number of pixels mistakenly recognized as cracks
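The metrics defined above can all be derived from a per-class confusion matrix. A minimal sketch in NumPy (function and variable names are illustrative, not from the paper) that computes PA, MPA, and MIoU for a binary crack/background segmentation:

```python
import numpy as np

def confusion_matrix(pred, gt, num_classes=2):
    """Accumulate a num_classes x num_classes confusion matrix from
    flattened prediction and ground-truth arrays (rows = ground truth)."""
    mask = (gt >= 0) & (gt < num_classes)
    return np.bincount(
        num_classes * gt[mask].astype(int) + pred[mask].astype(int),
        minlength=num_classes ** 2,
    ).reshape(num_classes, num_classes)

def segmentation_metrics(cm):
    """PA, MPA, and MIoU from a confusion matrix cm."""
    pa = np.diag(cm).sum() / cm.sum()                       # correct / total pixels
    per_class_acc = np.diag(cm) / cm.sum(axis=1)            # recall per class
    mpa = np.nanmean(per_class_acc)
    iou = np.diag(cm) / (cm.sum(axis=1) + cm.sum(axis=0) - np.diag(cm))
    miou = np.nanmean(iou)
    return pa, mpa, miou

# Toy example: 1 = crack pixel, 0 = background.
gt   = np.array([0, 0, 1, 1, 1, 0])
pred = np.array([0, 1, 1, 1, 0, 0])
pa, mpa, miou = segmentation_metrics(confusion_matrix(pred, gt))
```

For crack detection specifically, the diagonal entry of the crack class is TP and its off-diagonal column entry is FP, matching the definitions in the last highlight.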



Introduction

With the development of building defect detection technology, image processing has been increasingly applied to the recognition and classification of building defects. Early bridge crack recognition models described the attributes of cracks captured in images through the traditional extraction of global low-level features such as texture, shape, and edge. These models were affected by complex road conditions such as the size of pavement texture particles, road-and-bridge joints, and marking edges; such interference makes segmentation and effective recognition of cracks difficult. To address these issues, BC-DUnet is proposed in this study, which better achieves small-target segmentation of fine bridge cracks under complex backgrounds. The experimental results show that, compared to existing deep neural networks, the proposed network achieves significantly higher performance, accuracy, and versatility, making bridge detection and monitoring efficient, inexpensive, and automated.

Related studies
Data acquisition
BC-DUnet
Data preparation
Training methods
Ablation experiment
Comparison of the attention mechanism
Comparison with other methods
Conclusion