Abstract

The application of traditional deep learning methods to intelligent fault diagnosis is limited by the distribution discrepancy of unlabeled data collected under different working conditions. Transfer learning can overcome this limitation by generalizing a model trained on a source domain with abundant labeled data to solve the fault diagnosis problem in a target domain with only unlabeled data. Current transfer learning methods focus on directly measuring and minimizing the distribution discrepancy of features between the two domains. Such methods may struggle when the cross-domain distributions are complex and heterogeneous, and may misalign same-class data that exhibit the greatest distribution discrepancy across the two domains. In this paper, a deep transfer learning method with inter-domain decision discrepancy minimization (InDo-DDM) is proposed. The proposed method directly measures and minimizes the discrepancy between the decision result matrices of the two domains, thereby facilitating the minimization of the distribution discrepancy between the two-domain data. With the proposed domain indicator, InDo-DDM can locate the greatest decision discrepancy and better align the data with the greatest distribution discrepancy. In addition, the measurement of the decision discrepancy is made more precise and robust by introducing the nuclear norm, which avoids error-prone classification of data near the decision boundary caused by intra-batch imbalance. Extensive experiments in three different scenarios, using two datasets from Case Western Reserve University (CWRU) and one dataset from the Prognostic and Health Management (PHM) Data Challenge, show that InDo-DDM outperforms other widely used methods.
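The central idea described above, measuring a batch-level decision discrepancy through the nuclear norm of the classifier's prediction matrices, can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation of InDo-DDM; the exact loss form, the softmax normalization, the batch-size scaling, and the absolute-difference combination are assumptions made purely for illustration.

```python
# Minimal illustrative sketch (NOT the authors' InDo-DDM code): a nuclear-norm-based
# decision-discrepancy term between source- and target-batch prediction matrices.
import torch
import torch.nn.functional as F

def decision_discrepancy(logits_src: torch.Tensor, logits_tgt: torch.Tensor) -> torch.Tensor:
    """Batch-level decision discrepancy between source and target domains.

    logits_src, logits_tgt: (batch, num_classes) outputs of a shared classifier.
    The nuclear norm (sum of singular values) of each softmax prediction matrix
    serves as a batch-level summary that is less sensitive to individual samples
    lying near the decision boundary than per-sample comparisons.
    """
    p_src = F.softmax(logits_src, dim=1)  # source decision result matrix
    p_tgt = F.softmax(logits_tgt, dim=1)  # target decision result matrix
    # Normalize by batch size so source and target batches are comparable.
    nuc_src = torch.linalg.matrix_norm(p_src, ord="nuc") / p_src.shape[0]
    nuc_tgt = torch.linalg.matrix_norm(p_tgt, ord="nuc") / p_tgt.shape[0]
    return (nuc_src - nuc_tgt).abs()

# Example use (hypothetical training step): combine with a supervised loss on the
# labeled source batch, weighted by a trade-off hyperparameter `lam`.
# total_loss = F.cross_entropy(logits_src, labels_src) \
#              + lam * decision_discrepancy(logits_src, logits_tgt)
```

In this sketch the discrepancy is a single scalar per batch, so it can be added to the source-domain classification loss and minimized jointly by backpropagation.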
