Abstract

Existing tunnel defect detection methods include crack and water-leakage segmentation networks. However, when an automated detection algorithm cannot handle every defect case, manual inspection is still required to eliminate potential risks. Existing intelligent detection methods lack a universal approach that can accurately segment all defect types, particularly when multiple defects are superimposed. To address this issue, a defect segmentation model based on the Vision Transformer (ViT) is proposed, whose network structure differs completely from that of a convolutional neural network. The model introduces an adapter and a decoding head to improve the training of the transformer encoder, allowing it to fit small-scale datasets. In post-processing, a method is proposed to quantify the threat level of defects, with the aim of producing qualitative results that simulate human observation. The model achieved impressive results on a real-world dataset of 11,781 defect images collected from an operating subway tunnel. The visualization results show that the method is effective and applies uniform criteria to single, multiple, and comprehensive defects. Moreover, the tests show that the proposed model has a significant advantage when multiple defects are superimposed, achieving 93.77% mean accuracy (Acc), 88.36% mean intersection over union, and 92.93% mean F1-score. With similar training parameters, the Acc of the proposed method exceeds that of the DeepLabv3+, Mask R-CNN, and UPerNet-R50 models by more than 10%, and that of the Swin Transformer and ViT-Adapter by more than 5%. This study implements a general method that can process all defect cases and output threat evaluation results, thereby making tunnel inspection more intelligent.
