Abstract

Concrete bridges are susceptible to surface damage such as water corrosion, spalling, and rebar corrosion, which can pose serious safety risks; detecting such damage is therefore crucial to ensuring their safety. Computer vision-based methods for detecting surface damage in concrete bridges have made great progress. However, mainstream methods can only locate surface damage and cannot obtain detailed information, such as the proportion of damaged pixels. To address these challenges, this paper proposes TSCB-Net, a semantic segmentation model for surface damage detection in concrete bridges. TSCB-Net combines Transformers with an encoder-decoder structure, offering a strong alternative for detecting surface damage in concrete bridges. Its encoder uses improved baselines with the pyramid vision transformer (PVTv2) to extract semantic information and spatial details from concrete bridge images layer by layer and to capture long-range dependencies between features. The decoder incorporates a context information recovery (CIR) module in the final upsampling stage, using convolutional layers to achieve a larger receptive field. This module effectively models contextual information for semantic segmentation and facilitates multi-scale surface damage detection in concrete bridges. In experiments, TSCB-Net achieves an overall mIoU of 75.84% and an mF1 of 85.54% with 5.84 M parameters, outperforming state-of-the-art models. TSCB-Net can quickly and accurately detect and identify surface bridge damage, which can help in the early detection of structural problems and the adoption of appropriate repair and protection measures.
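To make the described layout concrete, the sketch below shows a minimal PyTorch-style encoder-decoder of the kind the abstract outlines: a pyramid encoder producing multi-scale features (standing in for the PVTv2 backbone) followed by a decoder head that enlarges the receptive field with convolutions before the final upsampling. This is not the authors' implementation; all module names, channel sizes, and the use of dilated convolutions in the CIR-style head are assumptions for illustration only.

```python
# Minimal sketch of the encoder-decoder layout described in the abstract.
# The real TSCB-Net uses a PVTv2 backbone; a stub encoder stands in for it here,
# and the CIR-style head is a guess based on the stated idea of enlarging the
# receptive field with convolutional layers. Names and hyperparameters are
# hypothetical, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class StubPyramidEncoder(nn.Module):
    """Placeholder for a PVTv2-style encoder producing four feature scales."""
    def __init__(self, dims=(32, 64, 160, 256)):
        super().__init__()
        chans = (3,) + dims
        self.stages = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(chans[i], chans[i + 1], 3, stride=2, padding=1),
                nn.BatchNorm2d(chans[i + 1]),
                nn.ReLU(inplace=True),
            )
            for i in range(4)
        )

    def forward(self, x):
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)  # progressively downsampled feature maps
        return feats


class ContextRecoveryHead(nn.Module):
    """Hypothetical CIR-style head: parallel dilated convolutions widen the
    receptive field before the final upsampling to the input resolution."""
    def __init__(self, in_ch, num_classes, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, in_ch, 3, padding=d, dilation=d) for d in dilations
        )
        self.classify = nn.Conv2d(in_ch * len(dilations), num_classes, 1)

    def forward(self, x, out_size):
        x = torch.cat([F.relu(b(x)) for b in self.branches], dim=1)
        x = self.classify(x)
        return F.interpolate(x, size=out_size, mode="bilinear", align_corners=False)


class ToySegNet(nn.Module):
    """Encoder-decoder segmentation sketch (not the published TSCB-Net)."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.encoder = StubPyramidEncoder()
        self.head = ContextRecoveryHead(256, num_classes)

    def forward(self, x):
        feats = self.encoder(x)
        return self.head(feats[-1], x.shape[-2:])


if __name__ == "__main__":
    logits = ToySegNet()(torch.randn(1, 3, 256, 256))
    print(logits.shape)  # torch.Size([1, 4, 256, 256]): per-pixel class logits
```

Per-pixel logits at full resolution are what make it possible to report quantities such as the proportion of damaged pixels, which the abstract highlights as missing from detection-only methods.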
