Abstract

This research develops a novel computer vision approach named the spatial-channel hierarchical network (SCHNet), which supports automated and reliable concrete crack segmentation at the pixel level. Specifically, SCHNet, built on a Visual Geometry Group 19 (VGG19) base net, contains a self-attention mechanism realized by three parallel modules: a feature pyramid attention module, a spatial attention module, and a channel attention module. The network not only captures semantic interdependencies in the spatial and channel dimensions but also adaptively integrates local features with their global dependencies. Segmentation performance is evaluated with the Mean Intersection over Union (Mean IoU) metric on a public dataset containing 11,000 cracked and non-cracked images with a unified resolution of 256 × 256 pixels (px). The experimental results confirm the effectiveness of the three attention modules, which individually increase Mean IoU by 1.62% (from 72.54% to 74.16%), 5.15% (from 74.16% to 79.31%), and 5.76% (from 74.16% to 79.92%), respectively. With additional strategies such as data augmentation and a multi-grid method, SCHNet boosts Mean IoU to 85.31%. In a comparison with state-of-the-art models (i.e., U-Net, DeepLab-v2, PSPNet, Ding, and Dilated FCN) on the test dataset, SCHNet outperforms them all, improving Mean IoU by at least 7.51%. Moreover, SCHNet is robust to noise and generalizes well under various conditions, including shadows, rough surfaces, and holes. Overall, this research contributes SCHNet, which integrates spatial and channel information during feature extraction, resulting in more accurate and efficient crack detection.
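For context, the Mean IoU metric reported above is the per-class intersection over union averaged across classes (here, crack and background). The following is a minimal NumPy sketch of that computation, assuming integer label maps; it is illustrative only and not the authors' evaluation code:

```python
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int = 2) -> float:
    """Mean Intersection over Union averaged over classes.

    pred, target: integer label maps of shape (H, W),
    e.g. 0 = background, 1 = crack for binary crack segmentation.
    """
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        intersection = np.logical_and(pred_c, target_c).sum()
        union = np.logical_or(pred_c, target_c).sum()
        if union > 0:  # skip classes absent from both prediction and ground truth
            ious.append(intersection / union)
    return float(np.mean(ious))

# Example on a 256 x 256 label map, matching the dataset resolution above
pred = np.random.randint(0, 2, (256, 256))
target = np.random.randint(0, 2, (256, 256))
print(f"Mean IoU: {mean_iou(pred, target):.4f}")
```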
