Abstract

Deep Neural Network (DNN) models have been extensively developed for intelligent bearing fault diagnosis. The superior performance of DNN-based fault diagnosis methods is largely attributed to deeper network structures, which increase model complexity and make the models difficult to deploy in resource-constrained industrial environments. To address this problem, this paper proposes a multi-hierarchical compression method to compact deep neural networks for intelligent fault diagnosis. It combines network pruning, parameter quantization, and compressed matrix storage to process the deep model from different perspectives, achieving a considerable reduction in parameter volume and accelerating both training and response. Firstly, structured pruning is employed to remove inconsequential filters in the convolutional layers, and unstructured pruning is utilized to eliminate trivial connections in the fully connected layers. Then, parameter quantization is applied to minimize the number of bits required to represent each parameter. Finally, compressed matrix storage is adopted to reduce storage requirements and further ease the demands on the monitoring system. Experimental results and comparisons on two bearing datasets validate the effectiveness of the proposed method. The results show that, for two CNN networks of different depths, the integrated compression method achieves considerable reductions in parameter count and floating-point operations with almost no decrease in recognition accuracy. The proposed method represents a promising step toward the practical industrial application of intelligent fault diagnosis.
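
The sketch below illustrates the three compression stages named in the abstract on a toy model, using standard PyTorch utilities as stand-ins for the paper's exact procedures. The network architecture, pruning ratios, bit width, and step ordering are illustrative assumptions, not the authors' reported settings; the compressed-storage step is shown on the float weights for simplicity.

```python
# Minimal sketch: structured/unstructured pruning, int8 quantization,
# and CSR storage of the sparsified fully connected weights.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

class SmallCNN(nn.Module):
    """Toy 1-D CNN for vibration signals (hypothetical architecture)."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.conv1 = nn.Conv1d(1, 16, kernel_size=64, stride=8)
        self.conv2 = nn.Conv1d(16, 32, kernel_size=3)
        self.pool = nn.AdaptiveAvgPool1d(16)
        self.fc1 = nn.Linear(32 * 16, 128)
        self.fc2 = nn.Linear(128, n_classes)

    def forward(self, x):
        x = torch.relu(self.conv1(x))
        x = torch.relu(self.conv2(x))
        x = self.pool(x).flatten(1)
        x = torch.relu(self.fc1(x))
        return self.fc2(x)

model = SmallCNN()

# 1) Structured pruning: remove whole convolutional filters (dim=0),
#    ranked by L2 norm; the 30% ratio is an assumption.
for m in (model.conv1, model.conv2):
    prune.ln_structured(m, name="weight", amount=0.3, n=2, dim=0)
    prune.remove(m, "weight")  # make the pruning mask permanent

# 2) Unstructured pruning: zero the smallest-magnitude connections
#    in the fully connected layers; the 50% ratio is an assumption.
for m in (model.fc1, model.fc2):
    prune.l1_unstructured(m, name="weight", amount=0.5)
    prune.remove(m, "weight")

# 3) Compressed matrix storage: keep the now-sparse FC weights in CSR
#    form, storing only nonzero values plus index arrays.
fc1_csr = model.fc1.weight.detach().to_sparse_csr()
print("fc1 nonzeros:", fc1_csr.values().numel(),
      "of", model.fc1.weight.numel())

# 4) Parameter quantization: dynamic int8 quantization of the linear
#    layers reduces each remaining weight from 32 to 8 bits.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8)
print(quantized)
```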
