Abstract

In practical fault diagnosis applications, factors such as load fluctuations, changes in equipment condition, and environmental noise can cause a classifier trained on the source domain to fit target-domain data poorly. Unsupervised domain adaptation techniques have been developed to address this issue, but they typically require access to a fully labeled source domain, overlooking privacy concerns regarding source-domain data. We therefore consider a newer setting, source-free unsupervised domain adaptation (SFUDA), which relies solely on a model trained on source-domain samples and does not require access to the labeled source data itself. This paper introduces an SFUDA approach based on knowledge distillation (KD) that proceeds in two stages: (1) generalizing the source model by applying domain augmentation and label smoothing (LS) to improve its generalization capability; and (2) adapting the target model within a KD framework to transfer knowledge, with a mutual-information regularizer added to exploit the internal structure of the target data and thereby improve adaptability. To evaluate the efficacy of our approach, we conduct experiments on two datasets, the Case Western Reserve University dataset and the Paderborn University dataset, comprising 24 transfer tasks. The results demonstrate the effectiveness of the domain augmentation technique, the mutual-information regularization, and the proposed method as a whole.
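The abstract does not give the paper's exact loss formulations, but the three named components (label smoothing, knowledge distillation, and mutual-information regularization) have standard forms. The following is a minimal PyTorch-style sketch of those standard forms under that assumption; the function names and hyperparameters (`eps`, `T`) are illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F

# Stage 1 (sketch): cross-entropy with label smoothing, used when training
# the source model to improve its generalization.
def label_smoothing_ce(logits, targets, eps=0.1):
    n_classes = logits.size(1)
    log_probs = F.log_softmax(logits, dim=1)
    # Smoothed target: (1 - eps) on the true class, eps spread over the rest.
    smooth = torch.full_like(log_probs, eps / (n_classes - 1))
    smooth.scatter_(1, targets.unsqueeze(1), 1.0 - eps)
    return -(smooth * log_probs).sum(dim=1).mean()

# Stage 2 (sketch): knowledge distillation from the frozen source model
# (teacher) to the target model (student) on unlabeled target data.
def kd_loss(student_logits, teacher_logits, T=2.0):
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    # KL divergence between temperature-softened distributions,
    # scaled by T^2 to keep gradient magnitudes comparable.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

# Mutual-information regularizer on target predictions:
# I(x; y_hat) = H(mean prediction) - mean per-sample entropy.
# Maximizing it encourages confident yet class-balanced predictions,
# one common way to exploit the internal structure of unlabeled data.
def mutual_info_reg(logits):
    probs = F.softmax(logits, dim=1)
    mean_probs = probs.mean(dim=0)
    marginal_entropy = -(mean_probs * torch.log(mean_probs + 1e-8)).sum()
    cond_entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()
    return marginal_entropy - cond_entropy  # to be maximized
```

Under these assumptions, the stage-2 adaptation objective would combine the two target-side terms, e.g. `loss = kd_loss(s, t) - lam * mutual_info_reg(s)` with a trade-off weight `lam`, minimized over the student parameters only.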
