Abstract

By providing accurate and efficient crack detection and localization, image-based crack detection methodologies can facilitate decision-making and rehabilitation for roadway infrastructure. The deep convolutional neural network, one of the most prevalent image-based methodologies for object recognition, has been extensively adopted for crack classification tasks over the past decade. Most current deep convolutional neural network–based techniques utilize either intensity or range image data to interpret crack presence. However, the complexities of real-world data may impair the robustness of a deep convolutional neural network architecture when analyzing image data with various types of disturbances, such as low contrast in intensity images and shallow cracks in range images. Detection performance under these disturbances is important to protect the investment in infrastructure, as it can reveal the trend of crack evolution and provide information at an early stage to promote precautionary measures. This article proposes novel deep convolutional neural network–based roadway crack classification tools and investigates their performance from the perspective of heterogeneous image fusion. A vehicle-mounted laser imaging system is adopted for data acquisition on concrete roadways, with a depth resolution of 0.1 mm and an accuracy of 0.4 mm. In total, four types of image data, including raw intensity, raw range, filtered range, and fused raw image data, are utilized to train and test the deep convolutional neural network architectures proposed in this study. The experimental cases demonstrate that the proposed data fusion approach can reduce false detections, yielding improvements of 4.5%, 1.2%, and 0.7% in F-measure relative to using the raw intensity, raw range, and filtered range image data, respectively.
Furthermore, in another experimental case, the two novel deep convolutional neural network architectures proposed in this study are compared on the fused raw image data, and the one yielding the better classification performance is identified.
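To illustrate the fusion idea described above, the sketch below stacks co-registered intensity and range images into a single multi-channel array suitable as input to a convolutional network. This is a minimal, assumed channel-stacking scheme for illustration only; the abstract does not specify the paper's exact fusion operation, and the function name and normalization are hypothetical.

```python
import numpy as np

def fuse_intensity_range(intensity, rng):
    """Stack co-registered intensity and range images into one
    H x W x 2 array. A generic fusion sketch, not necessarily the
    scheme used in the article."""
    if intensity.shape != rng.shape:
        raise ValueError("images must be co-registered and equal in size")

    def norm(img):
        # Min-max normalize each modality to [0, 1] so neither
        # channel dominates training purely by its numeric scale.
        img = img.astype(np.float32)
        span = img.max() - img.min()
        return (img - img.min()) / span if span > 0 else np.zeros_like(img)

    return np.stack([norm(intensity), norm(rng)], axis=-1)

# Example with synthetic 256 x 256 patches (stand-ins for real
# intensity and range data from the laser imaging system).
fused = fuse_intensity_range(
    np.random.randint(0, 256, (256, 256), dtype=np.uint8),
    np.random.rand(256, 256),
)
print(fused.shape)  # (256, 256, 2)
```

A two-channel input of this form can be fed to a standard convolutional architecture by setting its input depth to 2, letting the first convolutional layer learn cross-modality filters jointly.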
