Abstract

Data-driven intelligent fault diagnosis methods automatically learn the patterns of mechanical faults by mining monitoring data. However, the accuracy of such models depends heavily on high-quality labeled data, which is extremely scarce in industrial scenarios, and insufficient training data can even cause data-driven models to learn incorrect classification boundaries. Therefore, this study proposes a prior knowledge-enhanced self-supervised learning framework based on time-frequency invariance, aiming to reduce the amount of labeled training data the model requires. First, 12 prior time-domain features and 12 prior frequency-domain features are established as pseudo-labels for the time-domain feature extractor and the frequency-domain feature extractor, respectively. Then, inspired by the transformation relationship between the time and frequency domains of a signal, time-frequency invariance is introduced to strengthen the model's feature learning during pre-training. The time-domain pseudo-labels, the frequency-domain pseudo-labels, and the time-frequency invariance together constitute the pretext task performed on unlabeled data in the pre-training stage, enabling the framework to mine richer features from unlabeled monitoring data. Three experiments on mechanical fault datasets validate the effectiveness of the framework on small-sample tasks and downstream transfer tasks, and ablation experiments further confirm the effectiveness of the proposed time-frequency domain network. The framework thus shows strong potential for industrial application from the perspective of prior diagnostic knowledge.
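
As a rough illustration of the pretext objective described above, the sketch below (in PyTorch) regresses a handful of classical time-domain and frequency-domain statistics as pseudo-labels and adds a time-frequency invariance term between the two encoders. All function and module names, the particular statistics, and the cosine-similarity form of the invariance term are assumptions made for illustration; they are not the paper's exact 12-feature sets or loss definitions.

# Hedged sketch of the prior knowledge-enhanced pretext task; names and details are illustrative assumptions.
import torch
import torch.nn.functional as F


def time_domain_features(x):
    # A few classical time-domain statistics used as pseudo-labels (illustrative subset, not the paper's 12).
    mean = x.mean(dim=-1)
    std = x.std(dim=-1)
    rms = x.pow(2).mean(dim=-1).sqrt()
    peak = x.abs().amax(dim=-1)
    return torch.stack([mean, std, rms, peak], dim=-1)


def frequency_domain_features(x):
    # A few classical spectral statistics used as pseudo-labels (illustrative subset, not the paper's 12).
    spec = torch.fft.rfft(x, dim=-1).abs()
    total = spec.sum(dim=-1)
    bins = torch.arange(spec.shape[-1], dtype=x.dtype, device=x.device)
    centroid = (spec * bins).sum(dim=-1) / (total + 1e-8)
    mean_amp = spec.mean(dim=-1)
    peak_amp = spec.amax(dim=-1)
    return torch.stack([total, centroid, mean_amp, peak_amp], dim=-1)


def pretext_loss(x, time_encoder, freq_encoder, time_head, freq_head, alpha=1.0):
    # Pseudo-label regression in both domains plus a time-frequency invariance term.
    z_t = time_encoder(x)                                 # embedding of the raw waveform
    z_f = freq_encoder(torch.fft.rfft(x, dim=-1).abs())   # embedding of the magnitude spectrum

    # Prior-knowledge pseudo-label regression for the two feature extractors.
    loss_t = F.mse_loss(time_head(z_t), time_domain_features(x))
    loss_f = F.mse_loss(freq_head(z_f), frequency_domain_features(x))

    # Time-frequency invariance: the two views of the same signal should agree.
    loss_inv = 1.0 - F.cosine_similarity(z_t, z_f, dim=-1).mean()

    return loss_t + loss_f + alpha * loss_inv

In this reading, the three terms correspond to the time-domain pseudo-labels, the frequency-domain pseudo-labels, and the time-frequency invariance that together form the pre-training pretext task on unlabeled signals.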
