Abstract

Timely detection and identification of rail breaks are crucial for the safety and reliability of railway networks. This article proposes a new deep learning-based approach that uses daily monitoring data from in-service trains. A time-series generative adversarial network (TimeGAN) is employed to mitigate data imbalance while preserving temporal dynamics when generating synthetic rail break samples. A feature-level attention-based bidirectional recurrent neural network (AM-BRNN) is proposed to enhance feature extraction and capture bidirectional dependencies in the sequential data for accurate prediction. The proposed approach is implemented on a three-year dataset collected from a section of railroads (up to 350 km) in Australia. A real-life validation is carried out to evaluate prediction performance: historical data are used to train the model, and future “unseen” rail breaks along the whole track section are used for testing. The results show that the model successfully predicts nine of the 11 rail breaks three months ahead of time, with a false prediction rate of 8.2% on nonbreaks. A three-month lead time gives railroads sufficient time for maintenance planning. Given the prediction results, a Shapley additive explanations (SHAP) method is employed to perform a cause analysis for each individual rail break; the results of this analysis can assist railroads in planning appropriate maintenance to prevent rail breaks.
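The abstract does not detail how the TimeGAN output is folded back into training. The following is a minimal sketch of the rebalancing step only, under assumed shapes (90-day windows, 12 monitoring features) and with random noise standing in for the TimeGAN samples, since training a TimeGAN is beyond the scope of this sketch.

import numpy as np

rng = np.random.default_rng(0)
# Placeholder stand-ins: many nonbreak sequences vs. very few real breaks.
# All shapes here are illustrative assumptions, not the paper's actual data.
X_nonbreak = rng.normal(size=(1000, 90, 12))
X_break = rng.normal(size=(11, 90, 12))
# synthetic_breaks would come from a TimeGAN trained on the break sequences;
# random noise stands in for it here purely to keep the sketch runnable.
synthetic_breaks = rng.normal(size=(500, 90, 12))

X = np.concatenate([X_nonbreak, X_break, synthetic_breaks])
y = np.concatenate([np.zeros(1000), np.ones(11 + 500)])  # 0 = nonbreak, 1 = break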
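The paper's exact AM-BRNN architecture is not given in the abstract. Below is a minimal PyTorch sketch of a feature-level attention bidirectional RNN; the class name AMBRNN, the choice of a GRU cell, and all layer sizes are illustrative assumptions rather than the authors' design.

import torch
import torch.nn as nn

class AMBRNN(nn.Module):
    """Minimal sketch of a feature-level attention bidirectional RNN."""

    def __init__(self, n_features: int, hidden_size: int = 64, n_classes: int = 2):
        super().__init__()
        # Feature-level attention: a weight per input feature at every time step
        self.feature_attn = nn.Sequential(
            nn.Linear(n_features, n_features),
            nn.Softmax(dim=-1),
        )
        # Bidirectional GRU captures dependencies in both time directions
        self.brnn = nn.GRU(n_features, hidden_size,
                           batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_size, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time_steps, n_features)
        weights = self.feature_attn(x)        # attention weights over features
        h, _ = self.brnn(weights * x)         # feed reweighted features to the BRNN
        return self.classifier(h[:, -1, :])   # break/nonbreak logits from last step

# Smoke test: 8 sequences of 90 days with 12 monitoring features each
logits = AMBRNN(n_features=12)(torch.randn(8, 90, 12))

Weighting the inputs before the recurrent layers lets attention act on individual monitoring features rather than on hidden states, which matches the "feature-level" framing in the abstract.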
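The abstract also does not say which SHAP explainer was used. As one hedged possibility, shap's GradientExplainer works with differentiable PyTorch models such as the AMBRNN sketch above; the tensors below are random stand-ins for real monitoring data.

import shap  # pip install shap
import torch

# Builds on the AMBRNN sketch above; shapes remain illustrative assumptions.
model = AMBRNN(n_features=12).eval()
background = torch.randn(100, 90, 12)          # reference sample for expected values
X_breaks = torch.randn(11, 90, 12)             # sequences the model flagged as breaks

explainer = shap.GradientExplainer(model, background)
shap_values = explainer.shap_values(X_breaks)  # per-feature, per-step contributions
# For each individual break, features with the largest positive SHAP values
# indicate the likeliest contributing causes, guiding targeted maintenance.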
