Abstract

Deep learning-based remaining useful life (RUL) prediction methods have achieved great success owing to their powerful feature representation capacity, especially when large volumes of condition monitoring data are available. However, how to fuse multi-sensor information to improve RUL prediction accuracy remains a challenging problem due to the complex temporal and spatial dependencies within multi-sensor signals. To address this problem, we propose a dual-view graph Transformer, named DVGTformer, for RUL prediction, which fully learns potential degradation patterns from multi-sensor signals by capturing the complex correlations within them. The proposed method involves the design of a novel graph Transformer, named GTformer, which collaboratively integrates a learnable graph adjacency matrix and multi-head self-attention to learn structural and dynamic correlations between graph nodes. We then construct the DVGTformer for RUL prediction based on the GTformer. Each layer of the DVGTformer is formed by cascading a temporal-view GTformer layer and a spatial-view GTformer layer to fuse temporal and spatial information across time stamps and sensor nodes. Experimental results on the benchmark CMAPSS dataset and a wind turbine dataset from real applications show that our method consistently provides more accurate and robust RUL predictions than state-of-the-art methods.
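To make the described architecture concrete, below is a minimal PyTorch sketch of the two ideas the abstract names: a GTformer-style layer that fuses a learnable adjacency matrix (structural correlations) with self-attention scores (dynamic correlations), and a dual-view layer that cascades a temporal-view pass over time stamps with a spatial-view pass over sensor nodes. This is an illustrative reading of the abstract only, not the authors' implementation; it assumes single-head attention and an additive adjacency bias, and all class and parameter names (GTformerLayerSketch, DVGTformerLayerSketch, adj) are hypothetical.

```python
import torch
import torch.nn as nn


class GTformerLayerSketch(nn.Module):
    """Hypothetical single-head sketch of a graph Transformer layer:
    attention scores are biased by a learnable adjacency matrix over the
    graph nodes, mixing structural (adjacency) and dynamic (self-attention)
    correlations. The paper's GTformer may differ in detail."""

    def __init__(self, num_nodes: int, d_model: int):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        # Learnable adjacency over nodes (structural correlations).
        self.adj = nn.Parameter(torch.randn(num_nodes, num_nodes) * 0.01)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_nodes, d_model)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        scores = q @ k.transpose(-2, -1) / (x.size(-1) ** 0.5)
        scores = scores + self.adj          # fuse structural + dynamic scores
        attn = torch.softmax(scores, dim=-1)
        return self.norm(x + attn @ v)      # residual connection + layer norm


class DVGTformerLayerSketch(nn.Module):
    """One dual-view layer: a temporal-view pass (attention across time
    stamps, per sensor) cascaded with a spatial-view pass (attention across
    sensor nodes, per time stamp), as the abstract describes."""

    def __init__(self, num_sensors: int, num_steps: int, d_model: int):
        super().__init__()
        self.temporal = GTformerLayerSketch(num_steps, d_model)
        self.spatial = GTformerLayerSketch(num_sensors, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_sensors, num_steps, d_model)
        b, s, t, d = x.shape
        x = self.temporal(x.reshape(b * s, t, d)).reshape(b, s, t, d)
        x = x.transpose(1, 2)               # view over sensor nodes
        x = self.spatial(x.reshape(b * t, s, d)).reshape(b, t, s, d)
        return x.transpose(1, 2)            # back to (batch, sensors, steps, d)


# Usage sketch: 14 sensors, 30-step windows, 64-dim features.
layer = DVGTformerLayerSketch(num_sensors=14, num_steps=30, d_model=64)
out = layer(torch.randn(8, 14, 30, 64))    # -> (8, 14, 30, 64)
```

The cascaded design means temporal mixing happens before spatial mixing within each layer; under this reading, stacking such layers lets the model alternate between the two views while the learnable adjacency matrices are trained end to end with the attention weights.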
