Abstract
With advances in artificial intelligence and computer vision, machine learning has become widely used to locate and detect road pavement distresses. Recently, recognition methods based on convolutional neural networks (CNNs) have been applied to segment pavement cracks at the pixel level in order to evaluate pavement condition. However, such methods typically rely on a manual preprocessing pipeline: the approximate location of cracks is determined by hand, images containing cracks are selected, and only then is pixel-level segmentation performed; automating this preprocessing to replace the manual selection step is therefore worthwhile. Moreover, the low proportion of positive samples, complex crack topologies, different inset conditions, and complex pavement backgrounds make automatic crack localization more challenging. This paper therefore proposes a novel preprocessing method for crack recognition that locates cracks automatically and yields substantial savings in labor cost. Specifically, a real-world pavement crack data set, captured by a common digital camera mounted on a vehicle, is built to test the proposed crack localization method, called Double-Head. Double-Head improves the accuracy of crack localization by using an independent fully connected head (fc-head) and a convolution head (conv-head). The results show that our method improves average precision (AP) by 6.5% over a Faster R-CNN using only an fc-head, and outperforms many advanced object detection methods.
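The core architectural idea named in the abstract, routing each pooled region-of-interest (ROI) feature through two independent heads, one fully connected head for classification and one convolutional head for box regression, can be sketched as follows. This is a minimal dependency-free illustration of the general Double-Head pattern, not the paper's implementation: the layer sizes, the toy 1-D convolution, and all names (`DoubleHead`, `feat_dim`, `num_classes`) are assumptions for illustration only.

```python
# Hedged sketch of the Double-Head idea (assumed shapes and names, not the paper's code):
# each ROI's pooled feature is processed by two independent branches:
#   fc-head   -> classification scores (fully connected layers)
#   conv-head -> bounding-box regression deltas (convolutional layers)
import random

def linear(x, w, b):
    """Fully connected layer: y_j = sum_i x_i * w[i][j] + b[j]."""
    return [sum(xi * w_row[j] for xi, w_row in zip(x, w)) + b[j]
            for j in range(len(b))]

def conv1d(x, kernel):
    """Same-padding 1-D convolution, standing in for the conv-head's conv stack."""
    k = len(kernel)
    pad = k // 2
    padded = [0.0] * pad + list(x) + [0.0] * pad
    return [sum(padded[i + j] * kernel[j] for j in range(k))
            for i in range(len(x))]

class DoubleHead:
    """Routes one pooled ROI feature through two independent heads."""
    def __init__(self, feat_dim, num_classes):
        rnd = random.Random(0)
        # fc-head weights: classification over num_classes
        self.w_cls = [[rnd.uniform(-0.1, 0.1) for _ in range(num_classes)]
                      for _ in range(feat_dim)]
        self.b_cls = [0.0] * num_classes
        # conv-head weights: toy conv kernel followed by a regressor to 4 box deltas
        self.kernel = [0.25, 0.5, 0.25]
        self.w_box = [[rnd.uniform(-0.1, 0.1) for _ in range(4)]
                      for _ in range(feat_dim)]
        self.b_box = [0.0] * 4

    def forward(self, roi_feat):
        cls_scores = linear(roi_feat, self.w_cls, self.b_cls)    # fc-head
        conv_feat = conv1d(roi_feat, self.kernel)                # conv-head
        box_deltas = linear(conv_feat, self.w_box, self.b_box)   # regression
        return cls_scores, box_deltas

# e.g. two classes: {background, crack}; feature length is illustrative
head = DoubleHead(feat_dim=8, num_classes=2)
scores, deltas = head.forward([0.1 * i for i in range(8)])
print(len(scores), len(deltas))  # 2 4
```

The point of the split is that the two heads do not share parameters: the fc-head can specialize for classification while the conv-head specializes for localization, which the abstract credits for the AP gain over a single-head Faster R-CNN.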