Abstract

To address the insufficient mining of deep contextual information in semantic segmentation of multiple concrete bridge defects, which stems from the diversity of defect texture, shape, and scale as well as significant background differences, we propose PID-MHENet, a concrete bridge apparent multi-defect segmentation network based on a PID encoder and multi-feature fusion. PID-MHENet consists of a PID encoder, skip connections, and a decoder. The PID encoder adopts a multi-branch structure: an integral branch and a proportional branch follow a "thick and long" design principle, while a differential branch follows a "thin and short" design principle. The PID Aggregation Enhancement (PAE) block combines the detail information of the proportional branch with the semantic information of the differential branch to strengthen the fusion of contextual information and introduces self-learning parameters, allowing the network to effectively extract defect boundary details, textures, and background differences. In the decoding stage, the Multi-Feature Fusion Enhancement Decoding Block (MFEDB) enhances and globally fuses the different feature maps introduced by the three-channel skip connection, improving segmentation accuracy for defects with similar backgrounds and for micro-defects. Experimental results show that the mean pixel accuracy (mPA) and mean Intersection over Union (mIoU) of PID-MHENet on the concrete bridge multi-defect semantic segmentation dataset improve by 5.17% and 5.46%, respectively, compared to the UNet network.
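
For readers who think in code, the following minimal PyTorch sketch illustrates the kind of three-branch encoder stage and PAE-style fusion described above. The branch depths, channel widths, module names (PAEFusion, PIDEncoderStage), and the additive fusion rule with learnable scalars alpha and beta are illustrative assumptions only; the abstract does not specify the actual implementation.

```python
# Minimal sketch of a three-branch PID-style encoder stage with a
# PAE-like fusion block. Channel widths, branch depths, and the exact
# fusion rule are assumptions for illustration; the abstract only states
# that the proportional/integral branches are "thick and long", the
# differential branch is "thin and short", and that PAE fuses branch
# features using self-learning parameters.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_bn_relu(in_ch, out_ch, stride=1):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class PAEFusion(nn.Module):
    """Fuses proportional (detail) and differential (boundary/semantic)
    features into the integral stream with learnable scalar weights."""

    def __init__(self, channels):
        super().__init__()
        # Self-learning fusion parameters (assumed to be scalars here).
        self.alpha = nn.Parameter(torch.ones(1))
        self.beta = nn.Parameter(torch.ones(1))
        self.proj = conv_bn_relu(channels, channels)

    def forward(self, p_feat, i_feat, d_feat):
        # Resample auxiliary branches to the integral branch resolution.
        p = F.interpolate(p_feat, size=i_feat.shape[2:], mode="bilinear",
                          align_corners=False)
        d = F.interpolate(d_feat, size=i_feat.shape[2:], mode="bilinear",
                          align_corners=False)
        return self.proj(i_feat + self.alpha * p + self.beta * d)


class PIDEncoderStage(nn.Module):
    """One encoder stage: thick/long P and I branches, thin/short D branch."""

    def __init__(self, in_ch=3, width=64):
        super().__init__()
        # "Thick and long": full width, two conv blocks.
        self.p_branch = nn.Sequential(conv_bn_relu(in_ch, width),
                                      conv_bn_relu(width, width))
        self.i_branch = nn.Sequential(conv_bn_relu(in_ch, width, stride=2),
                                      conv_bn_relu(width, width))
        # "Thin and short": half width, single conv block, then 1x1 projection.
        self.d_branch = nn.Sequential(conv_bn_relu(in_ch, width // 2, stride=2),
                                      nn.Conv2d(width // 2, width, 1))
        self.pae = PAEFusion(width)

    def forward(self, x):
        p, i, d = self.p_branch(x), self.i_branch(x), self.d_branch(x)
        fused = self.pae(p, i, d)   # aggregated features passed downstream
        return fused, (p, i, d)     # branch outputs kept for skip connections


if __name__ == "__main__":
    stage = PIDEncoderStage()
    fused, skips = stage(torch.randn(1, 3, 256, 256))
    print(fused.shape)  # torch.Size([1, 64, 128, 128])
```

In a full network of this kind, the per-branch outputs returned alongside the fused features would feed the three-channel skip connections consumed by an MFEDB-style decoder block; that part is omitted here.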
