Abstract

Distress segmentation assigns each pixel of a pavement image to one distress class or to the background, providing a simplified representation for distress detection and measurement. Although it has benefited greatly from deep learning, distress segmentation still suffers from poor calibration and difficulties in multi-model fusion. This study proposes a deep neural network that combines Dempster-Shafer theory (DST) with a transformer network for pavement distress segmentation. The network, called the evidential segmentation transformer, uses its transformer backbone to extract pixel-wise features from input images. A DST-based evidence layer then converts these features into pixel-wise mass functions, which are used to perform distress segmentation under the pignistic criterion. The network is trained iteratively with a new learning strategy that represents the uncertain information of ambiguous pixels by mass functions. In addition, an evidential fusion strategy is proposed to fuse heterogeneous transformers trained on different distress classes. Experiments on three public data sets (Pavementscape, Crack500, and CrackDataset) show that the proposed networks achieve state-of-the-art accuracy and calibration in distress segmentation, which allows distress shapes to be measured more accurately and stably. The proposed fusion strategy combines heterogeneous transformers while maintaining performance no lower than that of the individual networks on their respective data sets, making it possible to build a more general and accurate network for distress segmentation from existing ones.
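The abstract refers to two generic DST operations: the pignistic transform, which converts a mass function into a probability distribution for the final per-pixel decision, and evidential fusion, for which Dempster's rule of combination is the classical instrument. The sketch below illustrates both operations in their textbook form; the class names and numeric masses are hypothetical examples, not values from the paper, and the paper's actual evidence layer and fusion strategy may differ in detail.

```python
def pignistic(mass):
    """Pignistic transform: BetP(w) = sum over focal sets A containing w
    of m(A) / |A|. Each focal set's mass is split equally among its
    elements, yielding a probability distribution for decision making."""
    betp = {}
    for focal, m in mass.items():
        for cls in focal:
            betp[cls] = betp.get(cls, 0.0) + m / len(focal)
    return betp


def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions defined on the same
    frame of discernment, renormalizing by the non-conflicting mass."""
    combined = {}
    conflict = 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mb * mc
            else:
                conflict += mb * mc  # mass assigned to the empty set
    # Assumes the two sources are not totally conflicting (conflict < 1).
    return {a: v / (1.0 - conflict) for a, v in combined.items()}


# Hypothetical pixel-wise mass function over three illustrative classes:
# 0.5 committed to "crack", 0.2 to "background", and 0.3 left ambiguous
# between "crack" and "patch" (this ambiguity is what DST can express
# and a softmax output cannot).
m_a = {
    frozenset({"crack"}): 0.5,
    frozenset({"background"}): 0.2,
    frozenset({"crack", "patch"}): 0.3,
}
probs = pignistic(m_a)
label = max(probs, key=probs.get)  # pignistic decision for this pixel

# A second (hypothetical) network's mass function for the same pixel,
# fused with the first by Dempster's rule.
m_b = {
    frozenset({"crack"}): 0.6,
    frozenset({"crack", "patch", "background"}): 0.4,
}
fused = dempster_combine(m_a, m_b)
```

Representing focal sets as `frozenset`s keeps the combination rule a plain intersection of keys; in the paper's setting each mass function would be produced per pixel by the evidence layer rather than written by hand.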
