Abstract

Timely and accurate fault diagnosis is essential for the reliability and safety of modern industrial systems. However, traditional bearing fault diagnosis based on the residual neural network (ResNet) suffers from large parameter counts, high computational cost, and limited accuracy. To address this, a lightweight improved residual network (LIResNet) is proposed to enhance the ResNet structure. LIResNet introduces a parallel layer (PL) that merges a discrete cosine transform convolutional layer (DCTConv) with a standard convolutional layer (Conv). This reduces the number of trainable parameters while aiding feature extraction from the input signals; spectrograms obtained by continuous wavelet transform (CWT) preprocessing are fed into the PL. Furthermore, a self-attention mechanism is integrated to strengthen the model's ability to capture fault features. The residual structure is refined using low-rank decomposition, reducing model complexity; this refinement, combined with a max-pooling layer and a single convolutional layer, forms the downsampling residual module (DAM). The proposed batch-free normalization (BFN) and an improved hard-swish activation function constitute the normalized activation module (NAM), which ensures training stability and nonlinear mapping when batch sizes are small. Experiments on multiple bearing datasets show that the proposed method achieves a diagnostic accuracy of 99.8%, surpassing ResNet18's 98.7%. Moreover, while ResNet18 requires 11.18M parameters and 753.07M FLOPs, the proposed method uses only 0.06M parameters and 49.49M FLOPs. These findings underscore the improved lightweight performance and diagnostic capability of our approach.
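To make the parallel-layer idea concrete, the following is a minimal, illustrative sketch of a layer that runs a fixed DCT-basis convolution alongside a learnable standard convolution and concatenates the outputs. It is not the authors' implementation: the module name `ParallelLayer`, the DCT-II kernel construction, the channel split, and the concatenation fusion are assumptions made purely for illustration.

```python
# Illustrative sketch only: a fixed DCT-basis convolution in parallel with a
# learnable standard convolution, as suggested by the abstract's PL (DCTConv + Conv).
# All names, kernel sizes, and the fusion rule below are assumptions, not the paper's code.
import math
import torch
import torch.nn as nn


def dct_kernels(out_channels: int, in_channels: int, k: int) -> torch.Tensor:
    """Build fixed 2-D DCT-II basis filters of size k x k (assumed construction)."""
    x = torch.arange(k, dtype=torch.float32)
    # 1-D DCT-II basis: basis[u, x] = cos(pi * (x + 0.5) * u / k)
    basis_1d = torch.stack([torch.cos(math.pi * (x + 0.5) * u / k) for u in range(k)])
    # Outer products of the 1-D bases give k*k separable 2-D DCT filters.
    filters = torch.stack([torch.outer(basis_1d[u], basis_1d[v])
                           for u in range(k) for v in range(k)])           # (k*k, k, k)
    reps = -(-out_channels // (k * k))                                      # ceil division
    filters = filters.repeat(reps, 1, 1)[:out_channels]                     # (out, k, k)
    return filters.unsqueeze(1).repeat(1, in_channels, 1, 1) / in_channels


class ParallelLayer(nn.Module):
    """Fixed (non-trainable) DCT convolution in parallel with a learnable convolution;
    the two outputs are concatenated along the channel axis (assumed fusion rule)."""
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        half = out_ch // 2
        self.dct_conv = nn.Conv2d(in_ch, half, k, padding=k // 2, bias=False)
        self.dct_conv.weight.data.copy_(dct_kernels(half, in_ch, k))
        self.dct_conv.weight.requires_grad = False   # frozen: adds no trainable parameters
        self.conv = nn.Conv2d(in_ch, out_ch - half, k, padding=k // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.cat([self.dct_conv(x), self.conv(x)], dim=1)


# Usage example: a batch of single-channel CWT spectrograms (64 x 64) passed through the layer.
if __name__ == "__main__":
    spectrograms = torch.randn(8, 1, 64, 64)
    print(ParallelLayer(1, 16)(spectrograms).shape)   # torch.Size([8, 16, 64, 64])
```

Freezing the DCT branch is what makes this kind of layer cheap: only the standard-convolution branch contributes trainable parameters, which is consistent with the parameter reduction the abstract reports.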
