Abstract

Fault diagnosis is essential for condition-based maintenance. Recently, deep learning models have been introduced to learn hierarchical representations from raw data instead of relying on hand-crafted features, and they exhibit excellent performance. The success of current deep learning rests on two assumptions: 1) the training (source domain) and testing (target domain) datasets come from the same feature distribution; and 2) enough labeled data with fault information exist. However, because a machine operates under non-stationary working conditions, a model trained on the source domain cannot be applied directly to the target domain. Moreover, since sufficient labeled, or even unlabeled, data are often unavailable in the target domain, collecting labeled data and building a model from scratch is time-consuming and expensive. Motivated by transfer learning (TL), we present a new fault diagnosis method that generalizes the convolutional neural network (CNN) to the TL scenario. Two layers associated with task-specific features are adapted in a layer-wise way to regularize the parameters of the CNN. Furthermore, the domain loss is calculated with a linear combination of multiple Gaussian kernels, which enhances the adaptation ability compared to a single kernel. Through these two means, the distribution discrepancy is reduced and transferable features are learned. The proposed method is validated by transfer fault diagnosis experiments. Compared to a CNN without domain adaptation and to shallow transfer learning methods, the proposed method achieves the best fault classification performance.
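
The abstract describes the domain loss as a linear combination of multiple Gaussian kernels applied to the features of the adapted layers. The sketch below illustrates one common way such a multi-kernel MMD loss can be computed in PyTorch; the bandwidth values, equal kernel weights, and function names are illustrative assumptions rather than the authors' exact implementation.

```python
# Hypothetical sketch of a multi-kernel MMD (MK-MMD) domain loss: the
# source/target discrepancy is measured with a linear combination of
# Gaussian kernels, as described in the abstract. Bandwidths and the
# equal-weight combination are assumptions for illustration.
import torch


def gaussian_kernel(x, y, sigmas):
    """Sum of Gaussian (RBF) kernels over the given bandwidths.

    x: (n, d) source features, y: (m, d) target features.
    Returns the combined (n + m, n + m) kernel matrix.
    """
    total = torch.cat([x, y], dim=0)           # stack source and target batches
    dist2 = torch.cdist(total, total).pow(2)   # pairwise squared distances
    return sum(torch.exp(-dist2 / (2.0 * s ** 2)) for s in sigmas)


def mk_mmd_loss(source_feat, target_feat, sigmas=(1.0, 2.0, 4.0, 8.0, 16.0)):
    """Empirical MK-MMD between a source batch and a target batch."""
    n = source_feat.size(0)
    k = gaussian_kernel(source_feat, target_feat, sigmas)
    k_ss = k[:n, :n].mean()   # source-source term
    k_tt = k[n:, n:].mean()   # target-target term
    k_st = k[:n, n:].mean()   # cross term
    return k_ss + k_tt - 2.0 * k_st


if __name__ == "__main__":
    # Features taken from one of the adapted task-specific layers;
    # in training, this loss would be added to the classification loss
    # with a trade-off weight.
    src = torch.randn(32, 256)
    tgt = torch.randn(32, 256)
    print(mk_mmd_loss(src, tgt).item())
```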
