Abstract

Both newly built infrastructure and structures erected in earlier years face growing inspection and maintenance demands. Detecting building cracks, however, is difficult because cracks are small and the surrounding background is noisy and complex. In this study, a crack segmentation network based on an Encoder-Crossor-Decoder structure is proposed to address the small size of cracks and their susceptibility to background interference. A loss function is then proposed to handle the large imbalance between crack and background pixels in architectural crack segmentation; experiments show that this loss improves training and yields better semantic segmentation performance. Finally, to meet the requirements of building crack detection, a large dataset of concrete pavement cracks is constructed, filling the gap in large-scale crack semantic segmentation datasets. The effectiveness of the model and loss function is verified on three datasets covering most common material and structural scenes, and the model is compared with other deep learning segmentation models. The proposed model reaches an mIoU of 84.04%, 77.56%, and 87.38% on the bridge non-steel crack dataset, the steel surface crack dataset, and our concrete crack dataset, respectively, with accuracies of 99.14%, 98.62%, and 99.37% and F1 scores of 0.911, 0.873, and 0.963. It outperforms other deep learning based segmentation methods.
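The abstract does not give the form of the proposed loss. As a point of reference only, the sketch below shows one common way to handle the crack-to-background pixel imbalance it describes: a Dice term combined with positively weighted binary cross-entropy. The class name `ImbalanceAwareSegLoss` and the weight value are illustrative assumptions, not the paper's method.

```python
# Illustrative sketch only: a Dice + weighted BCE loss is one common remedy for the
# crack/background pixel imbalance described in the abstract. The paper's actual
# loss is not specified here; all names and values below are hypothetical.
import torch
import torch.nn as nn

class ImbalanceAwareSegLoss(nn.Module):
    """Combines a Dice term (overlap-based, insensitive to class ratio)
    with binary cross-entropy that up-weights the rare crack pixels."""
    def __init__(self, pos_weight: float = 10.0, smooth: float = 1.0):
        super().__init__()
        self.smooth = smooth
        # pos_weight > 1 makes mistakes on crack (positive) pixels cost more.
        self.bce = nn.BCEWithLogitsLoss(pos_weight=torch.tensor(pos_weight))

    def forward(self, logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # logits, target: (N, 1, H, W); target holds 1 for crack pixels, 0 otherwise.
        probs = torch.sigmoid(logits)
        intersection = (probs * target).sum(dim=(1, 2, 3))
        union = probs.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
        dice = (2.0 * intersection + self.smooth) / (union + self.smooth)
        return (1.0 - dice).mean() + self.bce(logits, target)

# Usage sketch: loss = ImbalanceAwareSegLoss()(model(images), masks)
```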
