Abstract

Prognostic health management (PHM) has become a crucial part of building highly automated systems; its primary task is to accurately predict the remaining useful life (RUL) of a system. Recently, deep models including the convolutional neural network (CNN) and long short-term memory (LSTM) have been widely used to predict RUL. However, these models generally require sufficient labeled training data to achieve good performance, whereas in industry labeled data are typically scarce relative to the abundance of unlabeled data, and the cost of full data annotation can be unaffordable. To address this challenge, domain adaptation seeks to transfer knowledge from a well-labeled source domain to an unlabeled target domain by mitigating the gap between them. In this paper, we leverage domain adaptation for RUL prediction and propose a novel method that aligns distributions at both the feature level and the semantic level. The proposed method yields a substantial improvement in model performance as well as faster convergence. In addition, we adopt a Transformer as the backbone, which captures long-term dependencies more efficiently than the widely used recurrent neural network (RNN) and is thus critical for improving the robustness of the model. We evaluate our model on the CMAPSS dataset and its newly published variant N-CMAPSS, both provided by NASA, achieving state-of-the-art results on both source-only RUL prediction and domain adaptive RUL prediction tasks.
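To make the overall setup concrete, the following is a minimal Python (PyTorch) sketch of a Transformer-encoder RUL regressor trained on labeled source windows together with an unsupervised feature-alignment term on unlabeled target windows. The abstract does not specify the paper's exact alignment objectives, so a simple linear-kernel MMD term is used here purely as an illustrative stand-in for feature-level alignment; all class names, dimensions, and hyperparameters are hypothetical and not taken from the paper.

# Minimal sketch (not the authors' implementation): Transformer-encoder RUL
# regressor plus an illustrative MMD feature-alignment term. All names and
# hyperparameters are assumptions for illustration only.
import torch
import torch.nn as nn

class TransformerRUL(nn.Module):
    def __init__(self, n_sensors=14, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(n_sensors, d_model)   # per-time-step sensor embedding
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)            # RUL regression head

    def forward(self, x):                            # x: (batch, time, sensors)
        h = self.encoder(self.embed(x))              # (batch, time, d_model)
        feat = h.mean(dim=1)                         # pooled sequence-level feature
        return self.head(feat).squeeze(-1), feat

def mmd_loss(fs, ft):
    # Linear-kernel MMD between source and target feature batches.
    return (fs.mean(dim=0) - ft.mean(dim=0)).pow(2).sum()

# One illustrative training step: supervised RUL regression on labeled source
# windows plus an unsupervised alignment loss on unlabeled target windows.
model = TransformerRUL()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
xs, ys = torch.randn(32, 30, 14), torch.rand(32)     # source windows + normalized RUL labels
xt = torch.randn(32, 30, 14)                          # unlabeled target windows
pred_s, feat_s = model(xs)
_, feat_t = model(xt)
loss = nn.functional.mse_loss(pred_s, ys) + 0.1 * mmd_loss(feat_s, feat_t)
opt.zero_grad(); loss.backward(); opt.step()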
