In real industrial scenarios, collecting a complete dataset that covers all fault categories under the same operating conditions is challenging, so any single source domain typically contains incomplete fault category knowledge. Deep-learning-based domain adaptation methods struggle in such multi-source scenarios: labeled data are insufficient and distribution differences between domains are large, which hinders the transfer of domain-specific knowledge and degrades fault diagnosis performance. To address these issues, a Dynamic Similarity-guided Multi-source Domain Adaptation Network (DS-MDAN) is proposed. The method leverages incomplete data from multiple source domains to mitigate distribution disparities in deep domain adaptation and improves diagnostic performance in the target domain by transferring knowledge across domains. DS-MDAN uses convolution kernels of different scales to extract multi-scale features and fuses them through upsampling followed by addition and concatenation. Adversarial training with a domain classifier and a fault classifier drives the feature extractor toward broadly applicable, domain-invariant representations. The similarity between source and target domain data is computed from features extracted by a shared-weight network and is used to dynamically adjust the contribution of each source domain, minimizing distribution differences. Finally, matched source and target domain samples are mapped into a common feature space for fault diagnosis. Experimental validation on several bearing fault datasets shows that DS-MDAN improves accuracy on multiple fault diagnosis tasks and exhibits good generalization capability.
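To make two of the described mechanisms concrete, the following is a minimal PyTorch-style sketch of (a) multi-scale convolutional feature extraction with upsampling-based fusion and (b) similarity-guided weighting of source domains. All module names, kernel sizes, and the cosine-similarity weighting rule are illustrative assumptions, not the authors' released implementation; the full DS-MDAN additionally trains adversarial domain and fault classifiers, which are omitted here.

```python
# Hypothetical sketch of multi-scale feature extraction and dynamic source weighting.
# Module and parameter names are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiScaleEncoder(nn.Module):
    """Shared-weight 1-D encoder with convolution kernels of different scales."""

    def __init__(self, in_channels=1, channels=32):
        super().__init__()
        # Parallel branches with different kernel sizes capture multi-scale features.
        self.branch_small = nn.Conv1d(in_channels, channels, kernel_size=3, stride=2, padding=1)
        self.branch_mid = nn.Conv1d(in_channels, channels, kernel_size=15, stride=4, padding=7)
        self.branch_large = nn.Conv1d(in_channels, channels, kernel_size=63, stride=8, padding=31)

    def forward(self, x):
        s = F.relu(self.branch_small(x))
        m = F.relu(self.branch_mid(x))
        l = F.relu(self.branch_large(x))
        # Upsample the coarser branches to the finest resolution, then fuse by
        # element-wise addition and channel-wise concatenation.
        m_up = F.interpolate(m, size=s.shape[-1], mode="linear", align_corners=False)
        l_up = F.interpolate(l, size=s.shape[-1], mode="linear", align_corners=False)
        fused = torch.cat([s + m_up, s + l_up], dim=1)
        return fused.mean(dim=-1)  # global average pooling -> feature vector


def dynamic_source_weights(source_feats, target_feats):
    """Weight each source domain by cosine similarity of its features to the target."""
    t_center = target_feats.mean(dim=0)
    sims = torch.stack([
        F.cosine_similarity(f.mean(dim=0), t_center, dim=0) for f in source_feats
    ])
    return F.softmax(sims, dim=0)  # contributions sum to 1


# Usage sketch: two source domains with incomplete fault categories, one target domain.
encoder = MultiScaleEncoder()
src_a = torch.randn(8, 1, 1024)   # vibration segments from source domain A
src_b = torch.randn(8, 1, 1024)   # vibration segments from source domain B
tgt = torch.randn(8, 1, 1024)     # unlabeled target-domain segments

feats_a, feats_b, feats_t = encoder(src_a), encoder(src_b), encoder(tgt)
weights = dynamic_source_weights([feats_a, feats_b], feats_t)
# `weights` would scale each source domain's classification/alignment loss,
# so sources whose features lie closer to the target distribution contribute more.
print(weights)
```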