Abstract

The railway system is a prime example of a safety-critical system. Predicting the causes of railway accidents is of great importance for improving railway transportation safety. Previous approaches to railway causation analysis have faced substantial challenges in data processing and analytical capability. To address this problem, this paper proposes a deep learning framework based on the Transformer architecture that uses historical railway equipment accident data to predict the causes of such incidents. First, we adapt the Convolutional Block Attention Module to text processing, using it as a lexical encoder to strengthen the acquisition of word semantics in accident texts. Second, to compensate for the standard Transformer's lack of inherent positional representation, we incorporate a BiGRU (Bidirectional Gated Recurrent Unit) as a contextual positional encoder that captures positional information in railway accident data. Finally, because accident reports are discrete tabular data, we employ cue-word (prompt) techniques to preprocess the data and reduce the model's learning burden. We evaluated the proposed model on the FRA (Federal Railroad Administration) dataset. The results show that our model surpasses current state-of-the-art language models, outperforming the best baseline by 3.56%, 0.42%, and 0.76% in precision, recall, and F1-score, respectively. Furthermore, our model accurately predicts accident categories prone to misclassification even when trained on limited data, again outperforming existing language models. These findings can contribute to the prevention and management of railway accidents.
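The paper's implementation is not included here; the sketch below is a minimal, illustrative PyTorch assembly of the three ideas summarized in the abstract: a CBAM-style attention module as a lexical encoder, a BiGRU as a contextual positional encoder feeding a Transformer encoder, and a cue-word-style serialization of tabular accident records. All module names, dimensions, hyperparameters, and the serialize_record helper are assumptions made for illustration, not the authors' released code.

```python
# Illustrative sketch only: module names, dimensions, and wiring are assumptions
# used to convey the combination of a CBAM-style lexical encoder, a BiGRU
# positional encoder, and a Transformer encoder for accident-cause classification.
import torch
import torch.nn as nn


def serialize_record(record: dict) -> str:
    """Turn a discrete tabular accident record into a prompt-like sentence
    (a stand-in for the paper's cue-word preprocessing)."""
    return " ; ".join(f"{key} is {value}" for key, value in record.items())


class ChannelSpatialAttention(nn.Module):
    """CBAM-style channel and spatial attention adapted to token embeddings (batch, seq, dim)."""
    def __init__(self, dim: int, reduction: int = 4):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(dim, dim // reduction), nn.ReLU(), nn.Linear(dim // reduction, dim)
        )
        self.spatial = nn.Conv1d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                       # x: (batch, seq, dim)
        # Channel attention: weight embedding dimensions using pooled statistics.
        avg = self.channel_mlp(x.mean(dim=1))   # (batch, dim)
        mx = self.channel_mlp(x.max(dim=1).values)
        x = x * torch.sigmoid(avg + mx).unsqueeze(1)
        # Spatial attention: weight token positions using cross-channel statistics.
        stats = torch.stack([x.mean(dim=-1), x.max(dim=-1).values], dim=1)  # (batch, 2, seq)
        x = x * torch.sigmoid(self.spatial(stats)).transpose(1, 2)
        return x


class AccidentCauseClassifier(nn.Module):
    """Hypothetical assembly: embeddings -> CBAM-style attention -> BiGRU
    (contextual positional encoder) -> Transformer encoder -> class logits."""
    def __init__(self, vocab_size: int, num_classes: int, dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.lexical = ChannelSpatialAttention(dim)
        self.bigru = nn.GRU(dim, dim // 2, batch_first=True, bidirectional=True)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, token_ids):                # token_ids: (batch, seq)
        x = self.lexical(self.embed(token_ids))
        x, _ = self.bigru(x)                     # BiGRU output stands in for sinusoidal positions
        x = self.encoder(x)
        return self.head(x.mean(dim=1))          # mean-pool over tokens, then classify


if __name__ == "__main__":
    record = {"Equipment": "locomotive", "Weather": "clear", "Speed_mph": 42}
    print(serialize_record(record))              # cue-word style serialization of a tabular row
    model = AccidentCauseClassifier(vocab_size=5000, num_classes=10)
    logits = model(torch.randint(0, 5000, (2, 32)))
    print(logits.shape)                          # torch.Size([2, 10])
```

In this sketch the BiGRU output, rather than a fixed sinusoidal encoding, carries order information into the Transformer encoder, mirroring the abstract's motivation for the contextual positional encoder.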
