Abstract

For digital-image-based bridge inspection tasks, images captured by camera-carrying unmanned aerial vehicles (UAVs) usually contain both the region of interest (ROI) and the background. However, accurately detecting cracks in concrete surface images that contain background information is challenging. To improve UAV-based bridge inspection, an image extraction and crack detection methodology is presented in this paper. First, a deep-learning-based semantic segmentation network, RandLA-BridgeNet, for large-scale bridge point clouds is trained and tested to facilitate 3D ROI extraction. Second, an image ROI extraction method based on 3D-to-2D projection is presented to generate images containing only the ROI. Finally, a data-driven convolutional neural network (CNN) called the grid-based classification and box-based detection fusion model (GCBD) is used to identify cracks in the processed images. An experiment on highway bridge images validates the presented methodology. The overall semantic segmentation and image ROI extraction accuracies are 97.0% and 98.9%, respectively. After ROI extraction, 47.9% of the grid cells, which correspond to background misrecognitions, are filtered out, greatly improving the crack identification accuracy.
