In recent years, deep transfer models have addressed the issue of distribution shift between the source and target domains by learning domain-invariant features. However, in cross-machine fault diagnosis scenarios, existing deep learning models struggle to fit the conditional distribution of target-domain samples, limiting the performance and generalization of domain adaptation models. To address these problems, we propose a deep targeted transfer network with clustering pseudo-label learning (DTTN-CPLL). DTTN-CPLL consists of three parts. First, a deep transfer network is constructed to extract cross-domain features, and a clustering pseudo-label learning algorithm is proposed to assign subdomain labels to the target-domain features, reducing the intra-class distribution gap. Then, to draw features within each subdomain closer together, we simultaneously minimize the entropy of the target-domain features and maximize the number of linearly independent vectors among them. Finally, under the constraint of local maximum mean discrepancy, we reduce the distribution discrepancy between the source and target domains at the subdomain feature level. We conducted 12 cross-machine transfer tasks on three open bearing datasets and a private high-speed train traction motor bearing dataset. The results demonstrate that, compared with other state-of-the-art models, DTTN-CPLL is effective and superior in cross-machine fault diagnosis.
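The subdomain alignment step described above, local maximum mean discrepancy (LMMD), computes a class-conditional MMD between source features and target features grouped by pseudo-label, then averages over classes. The sketch below is a minimal NumPy illustration under assumptions the abstract does not fix: a Gaussian kernel, hard (non-weighted) pseudo-labels, and the function names `gaussian_kernel` and `lmmd` are ours, not the paper's.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise Gaussian kernel matrix k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lmmd(Xs, ys, Xt, yt_pseudo, num_classes, sigma=1.0):
    """Local MMD: class-wise (subdomain) squared MMD averaged over classes,
    using hard source labels and hard target pseudo-labels (an assumption;
    the paper's exact weighting scheme is not given in the abstract)."""
    total, counted = 0.0, 0
    for c in range(num_classes):
        Xs_c = Xs[ys == c]
        Xt_c = Xt[yt_pseudo == c]
        if len(Xs_c) == 0 or len(Xt_c) == 0:
            continue  # skip subdomains missing from either domain
        # Biased squared-MMD estimate for subdomain c via the kernel trick.
        Kss = gaussian_kernel(Xs_c, Xs_c, sigma).mean()
        Ktt = gaussian_kernel(Xt_c, Xt_c, sigma).mean()
        Kst = gaussian_kernel(Xs_c, Xt_c, sigma).mean()
        total += Kss + Ktt - 2.0 * Kst
        counted += 1
    return total / max(counted, 1)
```

In a training loop this scalar would be added to the classification loss so that, per class, the target subdomain is pulled toward the matching source subdomain; identical per-class feature distributions drive the loss to zero.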