Abstract

This work presents a new method for sleeper crack identification based on a cascade convolutional neural network (CNN) to address the low efficiency and poor accuracy of traditional sleeper crack detection methods. The proposed algorithm mainly comprises an improved You Only Look Once version 3 (YOLOv3) network and a crack recognition network, where the crack recognition network includes two modules: the crack encoder-decoder network (CEDNet) and the crack residual refinement network (CRRNet). After the sleeper is extracted from the ballast bed using the gray projection method, the improved YOLOv3 network identifies and locates the cracks on the sleeper and segments them. The sleeper image is then input into CEDNet for crack feature extraction to predict a coarse crack saliency map. This coarse prediction map is input into CRRNet, which refines its edge information and local regions. The accuracy of the crack identification model is improved by using a mixed loss function combining binary cross-entropy (BCE), the structural similarity index measure (SSIM), and intersection over union (IOU). Results show that the method accurately detects cracks in sleeper images. For object detection, the proposed method is compared with YOLOv3 applied directly to locating sleeper cracks; it achieves an accuracy of 96.3%, a recall of 91.2%, a mean average precision (mAP) of 91.5%, and 76.6 frames per second (FPS). For crack extraction, the weighted F-measure is 0.831, the mean absolute error (MAE) is 0.0157, and the area under the curve (AUC) is 0.9453. The proposed method offers better recognition, higher efficiency, and greater robustness than the other network models.
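
For illustration, the mixed loss could be implemented as in the minimal PyTorch sketch below. The equal weighting of the three terms, the single-scale SSIM computed from average-pooled local statistics, the 11x11 window, and the (N, 1, H, W) saliency-map layout in [0, 1] are all assumptions for this sketch, not settings confirmed by the paper.

```python
# Minimal sketch of a mixed BCE + SSIM + IoU loss for crack saliency maps.
# Window size, constants, and equal term weights are assumed, not taken from the paper.
import torch
import torch.nn.functional as F

def ssim_loss(pred, target, window=11, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified single-scale SSIM loss using average-pooled local statistics."""
    pad = window // 2
    mu_p = F.avg_pool2d(pred, window, 1, pad)
    mu_t = F.avg_pool2d(target, window, 1, pad)
    sigma_p = F.avg_pool2d(pred * pred, window, 1, pad) - mu_p ** 2
    sigma_t = F.avg_pool2d(target * target, window, 1, pad) - mu_t ** 2
    sigma_pt = F.avg_pool2d(pred * target, window, 1, pad) - mu_p * mu_t
    ssim_map = ((2 * mu_p * mu_t + c1) * (2 * sigma_pt + c2)) / \
               ((mu_p ** 2 + mu_t ** 2 + c1) * (sigma_p + sigma_t + c2))
    return 1.0 - ssim_map.mean()

def iou_loss(pred, target, eps=1e-6):
    """Soft IoU loss over the predicted saliency map."""
    inter = (pred * target).sum(dim=(1, 2, 3))
    union = (pred + target - pred * target).sum(dim=(1, 2, 3))
    return (1.0 - (inter + eps) / (union + eps)).mean()

def hybrid_loss(pred, target):
    """Mixed BCE + SSIM + IoU loss; pred and target are (N, 1, H, W) in [0, 1]."""
    bce = F.binary_cross_entropy(pred, target)
    return bce + ssim_loss(pred, target) + iou_loss(pred, target)
```

The intuition behind the combination is that BCE supervises every pixel, SSIM encourages structural agreement over local windows (sharpening crack edges), and IoU constrains the overall overlap of the predicted crack region with the ground truth.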

Highlights

  • China’s total railroad mileage is expected to exceed 128,000 km by the end of 2020, prompting researchers to improve maintenance techniques for railroad infrastructure [1]

  • Although this type of method has been developed, it still suffers from the limitations of the techniques used and the common problem of poor crack detection. The efficiency and accuracy of crack detection have been enhanced with the development of computer vision technology. The main methods applied to this field are as follows: image processing-based methods [3], machine learning-based methods [4], and deep convolutional neural network (DCNN)-based methods [5]. The DCNN-based methods are further subdivided into methods based on image classification [5], object detection [6], and pixel-level segmentation [7], depending on how the crack detection problem is handled. The cascade network used here to detect cracks in sleepers is based on the latter two types of methods

  • We propose a method for detecting cracks in rail sleepers based on DCNNs to address the lack of accuracy of existing crack recognition approaches. The cascade CNN consists of a modified You Only Look Once version 3 (YOLOv3) network for localization, together with a crack encoder-decoder network (CEDNet) and a crack residual refinement network (CRRNet) for extracting and optimizing the rail sleeper crack features, respectively (a pipeline sketch follows this list)
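
As a rough illustration of how the cascade fits together at inference time, the sketch below wires an improved-YOLOv3 detector, CEDNet, and CRRNet in sequence. The function name, parameter names, and tensor conventions are placeholders assumed for this sketch and do not come from the authors' released code.

```python
# Minimal sketch of the cascade inference pipeline: localize crack regions with the
# improved YOLOv3, predict a coarse crack saliency map with CEDNet, refine it with CRRNet.
from typing import Callable, List, Tuple
import torch

def detect_sleeper_cracks(
    sleeper: torch.Tensor,                            # (3, H, W) sleeper image in [0, 1]
    yolo: Callable[[torch.Tensor], torch.Tensor],     # returns (N, 4) crack boxes in pixels
    cednet: Callable[[torch.Tensor], torch.Tensor],   # coarse crack saliency map
    crrnet: Callable[[torch.Tensor], torch.Tensor],   # refined crack saliency map
) -> Tuple[torch.Tensor, List[torch.Tensor]]:
    # 1. The improved YOLOv3 locates candidate crack regions on the sleeper image
    #    (the sleeper itself is assumed to have been cropped from the ballast bed
    #    beforehand, e.g. with the gray projection method).
    boxes = yolo(sleeper.unsqueeze(0))

    crack_maps = []
    for x1, y1, x2, y2 in boxes.round().long():
        patch = sleeper[:, y1:y2, x1:x2].unsqueeze(0)
        # 2. CEDNet predicts a coarse crack saliency map for the cropped region.
        coarse = cednet(patch)
        # 3. CRRNet refines the edges and local regions of the coarse prediction.
        refined = crrnet(coarse)
        crack_maps.append(refined.squeeze(0))
    return boxes, crack_maps
```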


Summary

Introduction

China’s total railroad mileage is expected to exceed 128,000 km by the end of 2020, prompting researchers to improve maintenance techniques for railroad infrastructure [1]. The cascade network used here to detect cracks in sleepers is based on the latter two types of methods, namely object detection and pixel-level segmentation. Cheng et al. [18] proposed an automatic U-Net-based road crack detection method and tested it on a crack dataset, achieving high pixel-level segmentation accuracy. We add a squeeze and excitation (SE) module at the end of the YOLOv3 backbone network to improve the accuracy of crack region extraction. After the encoding part of CEDNet extracts features from the input rail crack image, the shallow information of the crack image is passed to the corresponding decoding stage, where the low-level detail features are fused with the high-level complex semantics to improve the network's feature extraction performance.
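
As an illustration of the SE module mentioned above, the following is a minimal squeeze-and-excitation block of the kind that could be appended to a YOLOv3 backbone. The reduction ratio of 16 is the usual default from the SE literature and is an assumption here, not necessarily the paper's setting.

```python
# Minimal squeeze-and-excitation block: global pooling -> bottleneck MLP -> channel scaling.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Squeeze: global average pooling collapses each channel to a single value.
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Excitation: a two-layer bottleneck MLP produces per-channel weights in (0, 1).
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(n, c)).view(n, c, 1, 1)
        # Recalibrate: scale each feature channel by its learned importance.
        return x * w
```

In a detector, such a block would typically sit after the last backbone stage so that channel responses are recalibrated before the features reach the detection heads.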

Method Overview
Experiment and Results
Evaluation Metrics
Conclusion and Expectations
