Abstract

Pavement distress detection is of great significance for road maintenance and road safety. Detection methods based on deep learning have achieved outstanding performance in related fields, but they require large-scale training samples. For pavement distress detection, it is difficult to collect large numbers of distress images, and new distress types emerge over time, so sufficient samples cannot be guaranteed for training a supervised deep model. In this article, we propose a new few-shot pavement distress detection method based on metric learning, which can effectively learn new categories from a few labeled samples. We adopt a backbone network (ResNet18) to extract multilevel feature information from the base classes and feed the extracted features into a metric module. In the metric module, we introduce an attention mechanism to learn the “what” and “where” feature attributes and focus the model on the desired characteristics. We also introduce a new metric loss function that maximizes the distance between different categories while minimizing the distance within the same category. In the testing stage, we compute the cosine similarity between the support set and the query set to detect novel categories. The experimental results show that the proposed method significantly outperforms several benchmark methods on the pavement distress dataset (classification accuracies of 77.20% and 87.28% for 5-way 1-shot and 5-way 5-shot, respectively).
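The testing stage described above, cosine similarity between support-set and query-set features, can be sketched as follows. This is a minimal prototype-style sketch in NumPy; the feature dimension, the helper names, and the choice to average the support embeddings of each class into a prototype for the multi-shot case are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def cosine_similarity(a, b):
    """Row-wise cosine similarity between matrices a (n, d) and b (m, d)."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T  # shape (n, m)

def classify_queries(support_feats, support_labels, query_feats, n_way):
    """Assign each query image to the class whose mean support embedding
    (class prototype) is most cosine-similar to the query embedding."""
    prototypes = np.stack([
        support_feats[support_labels == c].mean(axis=0)
        for c in range(n_way)
    ])
    sims = cosine_similarity(query_feats, prototypes)  # (n_query, n_way)
    return sims.argmax(axis=1)
```

In a 5-way 5-shot episode, `support_feats` would hold the backbone features of the 25 labeled support images and `query_feats` the features of the unlabeled queries; the predicted label is simply the nearest prototype in cosine distance.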
