The domain shift in sample distribution caused by sharp speed variation violates the common assumption of stationary operating conditions, posing a severe challenge to most existing intelligent fault diagnosis methods. Moreover, data scarcity in industrial applications further compromises diagnostic accuracy and reliability. To address fault diagnosis under sharp speed variation with few samples, we developed an Attentional Contrastive Calibrated Transformer (ACCT) for time series. First, several convolution layers capture low-level local structural features. Then, a transformer is applied to the sequences of split patches to model global dependencies and extract domain-invariant features. Meanwhile, a data augmentation strategy based on regional mixing enhances generalization. Furthermore, to obtain a more discriminative feature representation, we designed a regularization based on unsupervised contrastive learning to calibrate the attention distribution. The results demonstrate that transformers are well suited to analyzing time-series data even under sharp speed variation, without requiring extra modules for cross-domain disentanglement. The proposed method outperforms several advanced transformers in three case studies under transient-speed conditions with few samples.
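The abstract mentions a data augmentation strategy based on regional mixing. A minimal sketch of what such a CutMix-style regional mix could look like for 1-D time-series signals is given below; the function name, parameters, and the uniform choice of segment location are illustrative assumptions, not the paper's exact procedure.

```python
import random

def regional_mix(x_a, x_b, alpha=0.5, rng=None):
    """Illustrative regional mixing for 1-D time series (assumed, CutMix-style):
    replace one contiguous segment of signal x_a with the corresponding
    segment of x_b, and return a label-mixing weight for x_a's class."""
    rng = rng or random.Random()
    n = len(x_a)
    seg = int(n * alpha)                 # length of the mixed region
    start = rng.randrange(0, n - seg + 1)  # random segment location
    mixed = list(x_a)
    mixed[start:start + seg] = x_b[start:start + seg]
    lam = 1.0 - seg / n                  # fraction of the signal kept from x_a
    return mixed, lam
```

Under this sketch, the classification loss would be weighted by `lam` for the label of `x_a` and `1 - lam` for the label of `x_b`, analogously to standard mixing-based augmentations.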