Abstract

Crack detection is vital to maintaining the structural safety of in-service bridges and is an increasing demand in the industrial community. Deep learning-based crack detection is an emerging method that provides a novel way to address this problem. Although deep neural networks (DNNs) can learn to detect cracks by themselves, they require large numbers of crack images to learn the features of real-world cracks. Moreover, beyond the variety of crack forms, images always contain various noise patterns that disturb the correct detection of crack regions. The lack of crack sample images has therefore been an obstacle to improving deep learning-based crack detection methods. This paper proposes a generative adversarial network (GAN)-based method to establish a synthesized crack image dataset with pixel-wise annotations, providing a novel alternative to traditional data augmentation. The Deep Convolutional GAN (DCGAN) model was adopted to generate synthesized crack annotations, while the Pixel2Pixel model was used to generate the corresponding synthesized crack images. The crack annotations and crack images generated over the training epochs are presented to show how the GANs learn to produce the synthesized images. Moreover, a comparative study was conducted to validate the performance of the synthesized crack image dataset for training a crack detection DNN. The results show that the DNN trained with the synthesized images achieves 74.34% of the MeanIoU reached by the same DNN model trained with real images. Regarding how synthesized and real crack images should be combined to train a crack detection DNN, pre-training on the synthesized crack images and then fine-tuning on the real images outperforms directly mixing the synthesized and real crack images for training. This work provides a reference for the GAN-based establishment of crack image datasets and for evaluating the quality of synthesized images for training crack detection DNNs.
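
The two-stage pipeline summarized above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes PyTorch, a 64x64 resolution, a single-channel annotation mask, and a simplified encoder-decoder standing in for the U-Net generator of the Pixel2Pixel model; all layer widths and depths are illustrative assumptions.

# Minimal sketch (not the authors' code) of the two-stage generation pipeline:
# a DCGAN-style generator maps noise to a synthetic crack annotation (mask),
# and a Pixel2Pixel-style generator translates that mask into a synthetic crack image.
# Resolution (64x64), channel counts, and layer depths are illustrative assumptions.
import torch
import torch.nn as nn

class DCGANGenerator(nn.Module):
    """Noise vector -> 1-channel 64x64 crack annotation (mask-like output)."""
    def __init__(self, z_dim=100, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, feat * 8, 4, 1, 0, bias=False),   # -> 4x4
            nn.BatchNorm2d(feat * 8), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False), # -> 8x8
            nn.BatchNorm2d(feat * 4), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False), # -> 16x16
            nn.BatchNorm2d(feat * 2), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),     # -> 32x32
            nn.BatchNorm2d(feat), nn.ReLU(True),
            nn.ConvTranspose2d(feat, 1, 4, 2, 1, bias=False),            # -> 64x64
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

class MaskToImageGenerator(nn.Module):
    """Annotation mask (1 channel) -> RGB crack image; a simplified encoder-decoder
    standing in for the full U-Net generator used by Pixel2Pixel."""
    def __init__(self, feat=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, feat, 4, 2, 1), nn.LeakyReLU(0.2, True),                          # -> 32x32
            nn.Conv2d(feat, feat * 2, 4, 2, 1), nn.BatchNorm2d(feat * 2),
            nn.LeakyReLU(0.2, True),                                                        # -> 16x16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1), nn.BatchNorm2d(feat), nn.ReLU(True),  # -> 32x32
            nn.ConvTranspose2d(feat, 3, 4, 2, 1), nn.Tanh(),                                   # -> 64x64
        )

    def forward(self, mask):
        return self.decoder(self.encoder(mask))

if __name__ == "__main__":
    g_mask = DCGANGenerator()
    g_image = MaskToImageGenerator()
    z = torch.randn(8, 100, 1, 1)            # batch of noise vectors
    synth_masks = g_mask(z)                  # synthesized pixel-wise annotations
    synth_images = g_image(synth_masks)      # paired synthesized crack images
    print(synth_masks.shape, synth_images.shape)  # [8, 1, 64, 64] and [8, 3, 64, 64]

Under this sketch, adversarial training of both generators (against their respective discriminators) would precede sampling the paired masks and images used to pre-train a crack detection DNN, which is then fine-tuned on real images.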
