Abstract

Gearbox fault diagnosis methods based on deep learning usually require a large amount of sample data for training, and these data are typically ideal, noise-free experimental data. In practice, however, complex environmental factors mean that a sufficient number of effective fault samples may not be available and the sample data may be contaminated by noise, which degrades the identification accuracy of fault diagnosis methods and the stability of their results. To achieve high diagnostic accuracy while improving resistance to noise, this paper proposes a multi-scale Transformer convolution network (MTCN) based on transfer learning. Concretely, a multi-scale coarse-grained procedure is incorporated to capture distinct and complementary features at multiple scales and to filter random noise to some extent. Meanwhile, a Transformer built on the attention mechanism is used to extract high-level, effective features, and a transfer learning strategy is applied to overcome the limitation of insufficient fault samples for model training. Finally, experiments are conducted to verify the effectiveness of the proposed method. The results show that the proposed method achieves higher accuracy and robustness in noisy environments than previous methods.
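
For reference, the multi-scale coarse-grained procedure mentioned in the abstract is commonly implemented by averaging non-overlapping windows of the raw vibration signal at each scale factor. The sketch below illustrates this standard definition only; the function names, scale factors, and test signal are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def coarse_grain(signal, scale):
    """Standard coarse-graining: average non-overlapping windows of length `scale`.

    Illustrative helper for the multi-scale preprocessing idea; the paper's
    exact implementation is not specified in the abstract.
    """
    n = len(signal) // scale
    return signal[: n * scale].reshape(n, scale).mean(axis=1)

def multiscale_views(signal, scales=(1, 2, 4, 8)):
    """Build one coarse-grained view of the raw signal per scale factor."""
    signal = np.asarray(signal, dtype=float)
    return {s: coarse_grain(signal, s) for s in scales}

# Example: a noisy 1-D vibration-like signal reduced to four scales.
x = np.sin(np.linspace(0, 20 * np.pi, 2048)) + 0.3 * np.random.randn(2048)
for s, v in multiscale_views(x).items():
    print(f"scale {s}: {v.shape[0]} points")
```

Averaging over larger windows suppresses high-frequency random noise while preserving coarser structure, which is why views at several scales can provide complementary inputs to the downstream network.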
