Abstract

The performance of crack segmentation is affected by complex scenes, including irregularly shaped cracks, cluttered image backgrounds, and the difficulty of acquiring global contextual information. To alleviate the influence of these factors, a dual-encoder network fusing transformers and convolutional neural networks (DTrC-Net) is proposed in this study. The structure of DTrC-Net is designed to capture both the local features and the global contextual information of crack images. To enhance feature fusion between adjacent layers and between the encoder and decoder, a feature fusion module and a residual path module were also added to the network. Through a series of comparative experiments, DTrC-Net was found to generate better predictions than other state-of-the-art segmentation networks, with the highest precision (75.60%), recall (78.86%), F1-score (76.44%), and intersection over union (64.30%) on the Crack3238 dataset. Moreover, DTrC-Net ran at a fast processing speed of 78 frames per second on images of 256 × 256 pixels. Overall, the proposed DTrC-Net outperformed other advanced networks in crack segmentation accuracy and demonstrated superior generalizability in complex scenes.
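
For readers who want a concrete picture of the dual-encoder idea, the sketch below shows one minimal PyTorch interpretation: a convolutional branch for local crack features, a transformer branch for global context, a 1×1 convolution standing in for the feature fusion module, and a small decoder producing a per-pixel mask. All module names, layer sizes, patch size, and the fusion strategy here are illustrative assumptions, not the paper's exact DTrC-Net configuration.

```python
# Minimal sketch of a dual-encoder segmentation network (CNN + transformer).
# Every architectural detail below is an assumption for illustration only.
import torch
import torch.nn as nn


class ConvBranch(nn.Module):
    """CNN encoder: captures local crack features at 1/4 resolution."""
    def __init__(self, in_ch=3, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)  # (B, ch, H/4, W/4)


class TransformerBranch(nn.Module):
    """Patch-embedding transformer encoder: captures global context."""
    def __init__(self, in_ch=3, dim=64, patch=4, depth=2, heads=4):
        super().__init__()
        self.embed = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):
        t = self.embed(x)                 # (B, dim, H/4, W/4)
        b, c, h, w = t.shape
        t = t.flatten(2).transpose(1, 2)  # (B, H*W/16, dim) token sequence
        t = self.encoder(t)               # global self-attention over tokens
        return t.transpose(1, 2).reshape(b, c, h, w)


class DualEncoderSeg(nn.Module):
    """Fuses both branches, then decodes to a single-channel crack logit map."""
    def __init__(self, ch=64):
        super().__init__()
        self.cnn = ConvBranch(ch=ch)
        self.trans = TransformerBranch(dim=ch)
        self.fuse = nn.Conv2d(2 * ch, ch, 1)  # stand-in for the fusion module
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(ch, ch // 2, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch // 2, 1, 2, stride=2),  # back to full size
        )

    def forward(self, x):
        f = torch.cat([self.cnn(x), self.trans(x)], dim=1)  # concat branches
        return self.decoder(self.fuse(f))


if __name__ == "__main__":
    model = DualEncoderSeg()
    out = model(torch.randn(1, 3, 256, 256))  # image size from the abstract
    print(out.shape)                          # torch.Size([1, 1, 256, 256])
```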
