Abstract

The identification and recognition of pavement distress are vital for automatic pavement evaluation. Computational efficiency and accuracy are the two factors that determine the quality of an evaluation model. Our study applies the state-of-the-art DeepLabV3+ image segmentation method with an attention mechanism to pavement distress segmentation on the Crack500 and GAPs384 datasets. Critical results are reported, together with a comprehensive comparison of different backbones and architectures. In addition, an adaptive probabilistic sampling method is proposed and compared against random cropping and image resizing. On the Crack500 dataset, the adaptive probabilistic sampling method combined with the DeepLabV3-attention architecture outperforms the other models, whereas on the GAPs384 dataset the same sampling method performs better with DeepLabV3 without attention. These differing results stem from the characteristics of the datasets: GAPs384 contains several asphalt surface types and a wide variety of distress classes, rather than only the crack information included in Crack500. Further performance-improvement techniques, such as batch normalization of the image input, a revised Dice loss, and hyperparameter search, have also been implemented in this work. The results are solid and reliable enough to support a critical analysis of the methods and datasets. This study demonstrates the considerable potential of deep learning for intelligent pavement evaluation. To advance the practical application of artificial intelligence to distress segmentation, we will explore domain adaptation methods in future studies.
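
For reference, the abstract mentions a revised Dice loss among the performance-improvement techniques. The exact revision is not specified here, so the following is only a minimal sketch of the standard soft Dice loss for binary crack segmentation in PyTorch; the class name and the smoothing constant are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn


class SoftDiceLoss(nn.Module):
    """Standard soft Dice loss for binary segmentation masks.

    Note: the paper's revised Dice loss may differ from this baseline;
    this sketch only illustrates the usual formulation.
    """

    def __init__(self, smooth: float = 1.0):
        super().__init__()
        self.smooth = smooth  # avoids division by zero on empty masks

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # logits:  (N, 1, H, W) raw network outputs
        # targets: (N, 1, H, W) ground-truth masks with values in {0, 1}
        probs = torch.sigmoid(logits).flatten(1)
        targets = targets.flatten(1).float()
        intersection = (probs * targets).sum(dim=1)
        union = probs.sum(dim=1) + targets.sum(dim=1)
        dice = (2.0 * intersection + self.smooth) / (union + self.smooth)
        return 1.0 - dice.mean()  # lower is better; 0 means perfect overlap
```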
