Abstract
In recent years, deep transfer models have addressed the issue of distribution shift between the source and target domains by learning domain-invariant features. However, in cross-machine fault diagnosis scenarios, existing deep learning models struggle to fit the conditional distribution of target-domain samples, which limits the performance and generalization of domain adaptation models. To address these problems, we propose a deep targeted transfer network with clustering pseudo-label learning (DTTN-CPLL). DTTN-CPLL consists of three parts. First, a deep transfer network is constructed to extract cross-domain features. To reduce the intra-class distribution gap, a clustering pseudo-label learning algorithm is proposed to assign subdomain labels to the target-domain features. Then, we simultaneously minimize the entropy of the target-domain features and maximize the number of linearly independent feature vectors in the target domain, reducing the distance between features within each subdomain. Finally, under the constraint of local maximum mean discrepancy, we reduce the distribution discrepancy between the source and target domains at the subdomain feature level. We conducted 12 cross-machine transfer tasks on three open bearing datasets and a private high-speed train traction motor bearing dataset. The results demonstrate that DTTN-CPLL is effective and outperforms other state-of-the-art models in cross-machine fault diagnosis.
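To make the three components named in the abstract concrete, the following is a minimal sketch, not the authors' implementation: it assumes k-means clustering for the pseudo-labels, an RBF kernel for the local maximum mean discrepancy, and the nuclear norm of the batch prediction matrix as a soft proxy for the number of linearly independent vectors. All function names and parameters are illustrative assumptions.

```python
# Illustrative sketch only (not the DTTN-CPLL code): clustering pseudo-labels,
# entropy / linear-independence terms, and a per-class (local) MMD.
import numpy as np
from sklearn.cluster import KMeans


def cluster_pseudo_labels(target_feats, n_classes):
    """Assign subdomain (pseudo) labels to target-domain features by clustering."""
    return KMeans(n_clusters=n_classes, n_init=10).fit_predict(target_feats)


def entropy_and_rank_terms(target_probs):
    """Entropy of target predictions (to be minimized) and the nuclear norm of
    the batch prediction matrix, a common soft surrogate for the number of
    linearly independent prediction vectors (to be maximized)."""
    eps = 1e-8
    entropy = -np.mean(np.sum(target_probs * np.log(target_probs + eps), axis=1))
    nuclear_norm = np.linalg.norm(target_probs, ord="nuc")
    return entropy, nuclear_norm


def rbf_kernel(x, y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between two feature sets."""
    d2 = np.sum(x**2, 1)[:, None] + np.sum(y**2, 1)[None, :] - 2.0 * x @ y.T
    return np.exp(-gamma * d2)


def local_mmd(src_feats, src_labels, tgt_feats, tgt_pseudo, n_classes, gamma=1.0):
    """Local MMD: average of per-class MMDs between source subdomains and the
    target subdomains defined by the clustering pseudo-labels."""
    total = 0.0
    for c in range(n_classes):
        xs = src_feats[src_labels == c]
        xt = tgt_feats[tgt_pseudo == c]
        if len(xs) == 0 or len(xt) == 0:
            continue  # skip classes with no samples in the current batch
        total += (rbf_kernel(xs, xs, gamma).mean()
                  + rbf_kernel(xt, xt, gamma).mean()
                  - 2.0 * rbf_kernel(xs, xt, gamma).mean())
    return total / n_classes
```

In a full training loop, the entropy, nuclear-norm, and local-MMD terms would be combined with the source-domain classification loss into one objective; the weighting of these terms is not specified here.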