Rolling bearings are often exposed to high speeds and pressures that disrupt the symmetry of their rotating structure and can cause serious failures. Intelligent rolling bearing fault diagnosis is critical to ensuring the reliable operation of machinery, and it has been facilitated by the growing popularity of convolutional neural networks (CNNs). However, the outstanding performance of fault diagnosis CNNs stems from complex and redundant network structures and parameters, which impose huge storage and computational requirements and make these models difficult to deploy on resource-limited industrial devices. This study addresses this problem by proposing a comprehensive compression method for CNNs applied to intelligent fault diagnosis. It combines several compression techniques, namely tensor train decomposition, parameter quantization, and knowledge distillation, which together significantly reduce redundancy and speed up the training of CNN models. Firstly, tensor train decomposition is applied to reduce redundant connections in both convolutional and fully connected layers. Next, parameter quantization minimizes the number of bits needed to represent and store parameters. Finally, knowledge distillation restores the accuracy of the compressed model. The effectiveness of the proposed approach is confirmed by experiments and an ablation study with different models on several datasets. The results show that it significantly reduces redundant information and floating-point operations with little degradation in accuracy. Notably, on the CWRU dataset, the model loses no accuracy despite a parameter reduction of about 60%. The proposed approach is a new attempt at intelligent fault diagnosis of rolling bearings in industrial equipment.
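To make the first compression step concrete, the following is a minimal sketch of tensor train (TT-SVD) factorization of a fully connected weight matrix using sequential truncated SVDs. It is an illustration under assumed shapes and ranks, not the authors' implementation; the function names `tt_decompose` and `tt_reconstruct`, the 8×8×8×8 reshaping, and the TT rank of 8 are hypothetical choices.

```python
import numpy as np

def tt_decompose(tensor, max_rank):
    """Factorize a d-way tensor into d TT cores with ranks capped at max_rank."""
    dims = tensor.shape
    cores = []
    r_prev = 1
    mat = tensor
    for k in range(len(dims) - 1):
        # Unfold so the previous rank and current mode index form the rows.
        mat = mat.reshape(r_prev * dims[k], -1)
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(u[:, :r].reshape(r_prev, dims[k], r))
        # Carry the remaining factor forward to the next step.
        mat = np.diag(s[:r]) @ vt[:r]
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into the full tensor (to check the error)."""
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full.squeeze(axis=(0, -1))

# Example: a 64x64 FC weight reshaped into a 4-way tensor, compressed at TT rank 8.
w = np.random.randn(64, 64)
cores = tt_decompose(w.reshape(8, 8, 8, 8), max_rank=8)
w_hat = tt_reconstruct(cores).reshape(64, 64)
n_params = sum(c.size for c in cores)
rel_err = np.linalg.norm(w - w_hat) / np.linalg.norm(w)
print(f"params: {n_params} vs {w.size}, relative error: {rel_err:.3f}")
```

Storing the TT cores instead of the dense weight is what reduces the parameter count; quantization and knowledge distillation are then applied on top of the factorized model, as described above.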