Abstract

Bearing fault diagnosis is of great significance for ensuring the safe operation of mechanical equipment. This paper proposes an intelligent fault diagnosis method for rolling bearings based on a deep belief network (DBN) with hyperparameter optimization accelerated by parallel computing. Unlike traditional diagnosis methods, which extract features manually and depend heavily on prior knowledge of signal processing techniques and diagnostic expertise, the DBN extracts fault features automatically through its machine learning mechanism. To address the time-consuming training problem, parallel computing is applied to the DBN training process in a Master/Slave mode, improving training speed so that global optimization with a genetic algorithm (GA) and higher diagnosis accuracy can be achieved. Finally, the proposed method is verified on the public rolling-bearing datasets provided by Case Western Reserve University (CWRU), covering various fault depths, locations, and loads. The results indicate that the proposed method correctly identifies bearing faults under different conditions, which significantly enhances the intelligence of fault classification and reduces the time required for parameter selection in deep learning models.

Highlights

  • With the proposal of ‘‘Industrial Internet’’ and ‘‘Industry 4.0’’, countries around the world have put forward different strategies to explore and promote intelligent manufacturing

  • For this optimization problem, we focus only on the effect of the learning rate and momentum on deep belief network (DBN) performance and treat them as the optimization variables

  • The reason is that the communication time between multiple cores remains almost the same, whereas the proportion of communication time in the total running time decreases as the sample size grows. These results show that parallel computing greatly reduces the time spent on parameter optimization and that computing performance improves as the task size increases, which demonstrates the effectiveness of parallel computation for optimizing the parameters of deep learning models on massive data
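The GA-driven search over learning rate and momentum described in the highlights, with fitness evaluations farmed out Master/Slave-style to worker threads, can be sketched as follows. This is a minimal illustration, not the authors' code: the search bounds, population settings, and the toy `fitness` function (standing in for "train the DBN and return validation accuracy") are all assumptions.

```python
# Sketch: GA hyperparameter search over (learning rate, momentum) with
# fitness evaluations dispatched in parallel (Master/Slave pattern).
import random
from concurrent.futures import ThreadPoolExecutor

random.seed(0)

LR_RANGE = (1e-4, 1e-1)   # assumed search bounds, not from the paper
MOM_RANGE = (0.1, 0.99)

def fitness(ind):
    """Placeholder for a slave's job: train a DBN with these
    hyperparameters and return validation accuracy. Here: a toy
    surface peaking near lr=0.01, momentum=0.9."""
    lr, mom = ind
    return 1.0 - abs(lr - 0.01) * 10 - abs(mom - 0.9)

def random_ind():
    return (random.uniform(*LR_RANGE), random.uniform(*MOM_RANGE))

def crossover(a, b):
    # swap one hyperparameter between parents
    return (a[0], b[1]) if random.random() < 0.5 else (b[0], a[1])

def mutate(ind, rate=0.2):
    lr, mom = ind
    if random.random() < rate:
        lr = min(max(lr * random.uniform(0.5, 2.0), LR_RANGE[0]), LR_RANGE[1])
    if random.random() < rate:
        mom = min(max(mom + random.uniform(-0.1, 0.1), MOM_RANGE[0]), MOM_RANGE[1])
    return (lr, mom)

def ga_search(pop_size=12, generations=10, workers=4):
    pop = [random_ind() for _ in range(pop_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:   # the "slaves"
        for _ in range(generations):
            scores = list(pool.map(fitness, pop))           # master farms out evals
            ranked = [p for _, p in sorted(zip(scores, pop), reverse=True)]
            elite = ranked[: pop_size // 2]
            pop = elite + [
                mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(pop_size - len(elite))
            ]
    scores = [fitness(p) for p in pop]
    return max(zip(scores, pop))

best_score, (best_lr, best_mom) = ga_search()
print(best_lr, best_mom)
```

In a real deployment each slave would train a full DBN on its assigned hyperparameters, so the wall-clock gain from parallel evaluation grows with dataset size, consistent with the communication-overhead observation above.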


Summary

INTRODUCTION

With the proposal of ‘‘Industrial Internet’’ and ‘‘Industry 4.0’’, countries around the world have put forward different strategies to explore and promote intelligent manufacturing. Given its strong computing ability, parallel computing is introduced into the training process of the DBN-based fault diagnosis model with parameter optimization, so that faster computing speed and higher classification accuracy can be achieved. To obtain the optimal hyperparameters of the DBN, GA optimization is used for hyperparameter selection, ensuring higher diagnosis accuracy for performance-indicator-related faults. This provides an effective way to select parameters for DBNs and other deep learning models. Parallel computing is employed to speed up the training process of the GA-optimized deep belief network diagnosis model. This integration of GA and parallel computing is a useful tool for performance-indicator-related fault diagnosis based on deep learning models when dealing with big data in the era of intelligent manufacturing.
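The building block of the DBN mentioned above is the restricted Boltzmann machine (RBM), trained layer by layer with contrastive divergence so that fault features are learned automatically rather than hand-crafted. A minimal pure-Python sketch of one RBM trained with CD-1 on toy binary vectors is shown below; the layer sizes, learning rate, and toy "fault patterns" are illustrative assumptions, and a real DBN would stack several such RBMs on vibration-signal features.

```python
# Sketch: one RBM trained with contrastive divergence (CD-1),
# the layer-wise building block of a deep belief network.
import math
import random

random.seed(1)

N_V, N_H = 6, 3   # visible/hidden sizes (toy values)
W = [[random.gauss(0, 0.1) for _ in range(N_H)] for _ in range(N_V)]
b_v = [0.0] * N_V
b_h = [0.0] * N_H

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def h_probs(v):
    return [sigmoid(b_h[j] + sum(v[i] * W[i][j] for i in range(N_V)))
            for j in range(N_H)]

def v_probs(h):
    return [sigmoid(b_v[i] + sum(h[j] * W[i][j] for j in range(N_H)))
            for i in range(N_V)]

def sample(ps):
    return [1 if random.random() < p else 0 for p in ps]

def cd1_step(v0, lr=0.1):
    """One CD-1 update: positive phase minus one-step reconstruction."""
    ph0 = h_probs(v0)
    h0 = sample(ph0)
    v1 = v_probs(h0)      # reconstruction (probabilities)
    ph1 = h_probs(v1)
    for i in range(N_V):
        for j in range(N_H):
            W[i][j] += lr * (v0[i] * ph0[j] - v1[i] * ph1[j])
        b_v[i] += lr * (v0[i] - v1[i])
    for j in range(N_H):
        b_h[j] += lr * (ph0[j] - ph1[j])

# two toy binary "fault patterns" standing in for preprocessed signals
data = [[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]]
for _ in range(200):
    for v in data:
        cd1_step(v)

# total absolute reconstruction error over the training patterns
err = sum(abs(v[i] - v_probs(sample(h_probs(v)))[i])
          for v in data for i in range(N_V))
print(round(err, 3))
```

After greedy pre-training of each RBM layer, the stacked network is fine-tuned with a supervised classifier on top; the learning rate and momentum optimized by the GA govern exactly these update steps.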

