Abstract
Data-driven deep learning has recently gained increasing popularity in the field of machine fault diagnosis. The Stacked Denoising Autoencoder (SDA), a classic deep learning method, has been used successfully to learn effective representations for machine fault diagnosis. However, previous studies repeatedly run into the inherent limitations of SDA: high computational cost, time-consuming training, and poor scalability to high-dimensional data. These limitations restrict the applicability of such studies in real-world settings, which require timely model updates and fast real-time diagnosis. In addition, most previous studies concentrate on vibration signals and pay little attention to other kinds of sensor data, such as acoustic signals. To address these two problems, and inspired by the marginalized Stacked Denoising Autoencoder (mSDA), we adopt a variant of SDA for fast fault diagnosis on sound signals. In this variant, the stochastic gradient descent with back propagation required by traditional deep learning methods is replaced by a forward closed-form solution. In contrast to time-consuming approaches that must optimize thousands of parameters, this deep architecture only requires a few hyperparameters to be set in advance. To verify the effectiveness and efficiency of the proposal on sound signals, an extensive empirical evaluation is carried out on a publicly available sound-signal dataset of gear faults. Thorough comparisons with several state-of-the-art fault diagnosis approaches confirm the superiority of the proposal in terms of higher diagnostic accuracy and lower computational cost.
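For illustration only, the sketch below shows how a single marginalized denoising layer of this kind can be trained in closed form, following the published mSDA formulation rather than the authors' own code: the mapping W that reconstructs the clean input from its expected corruption is obtained by solving one linear system instead of running gradient descent. The function name mda_layer, the NumPy implementation, and the small ridge term reg are illustrative assumptions; the only hyperparameters are the corruption probability p and the number of stacked layers.

import numpy as np

def mda_layer(X, p, reg=1e-5):
    # X   : (d, n) data matrix, one column per sample (e.g. sound-signal features)
    # p   : feature corruption probability
    # reg : small ridge term for numerical stability (an assumption of this sketch)
    d, n = X.shape
    # Append a constant bias feature that is never corrupted.
    Xb = np.vstack([X, np.ones((1, n))])
    q = np.full(d + 1, 1.0 - p)
    q[-1] = 1.0                             # the bias always survives corruption

    S = Xb @ Xb.T                           # scatter matrix of the (bias-augmented) input
    Q = S * np.outer(q, q)                  # expected corrupted-input covariance
    np.fill_diagonal(Q, q * np.diag(S))     # diagonal entries need only one survival factor
    P = S[:d, :] * q                        # expected clean-input / corrupted-input cross term

    # Closed-form solution of the expected reconstruction loss: W Q = P
    W = np.linalg.solve(Q + reg * np.eye(d + 1), P.T).T
    return W, np.tanh(W @ Xb)               # mapping and nonlinear hidden representation

# Stacking a few such layers yields the deep representation fed to a classifier:
# H = X_train
# for _ in range(3):
#     _, H = mda_layer(H, p=0.5)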