Abstract
The rapid development of big data has led many researchers to focus on improving bearing fault classification accuracy with deep learning models. However, deploying a deep learning model on a resource-limited platform such as a smartphone or an STM32 microcontroller raises two difficulties: making the model as lightweight as possible and reducing its dependence on large training sets. To this end, a self-attention ensemble lightweight model combined with transfer learning (SLTL) is proposed to address these problems and deliver a model that is "small, light, and fast." Firstly, the raw vibration signals are converted into time–frequency images by the continuous wavelet transform (CWT). Secondly, a self-attention lightweight convolutional neural network (SLCNN) is built by integrating a self-attention mechanism (SAM) into an optimized SqueezeNet. Then, starting from an SLCNN pre-trained on ImageNet, rich parameter knowledge is transferred from the pre-trained model to the target model. Finally, a small number of training samples is used to fine-tune the target model. Experimental results on two bearing datasets validate the effectiveness of the SLTL method, which achieves 99.5% classification accuracy with fewer training samples than conventional CNN models. More importantly, SLTL has 0.95 M model parameters and 0.11 M floating-point operations (FLOPs), indicating that it attains high accuracy while remaining lightweight, which benefits platforms with limited resources.
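To make the described pipeline concrete, the following is a minimal sketch, not the authors' code: it assumes PyWavelets for the CWT step, torchvision's SqueezeNet 1.1 as a stand-in for the optimized SqueezeNet backbone, and nn.MultiheadAttention as the self-attention mechanism; all layer names, hyperparameters, and the freezing strategy are illustrative assumptions.

```python
# Hypothetical sketch of the SLTL pipeline: CWT preprocessing, SqueezeNet + self-attention,
# transfer of ImageNet weights, and fine-tuning with few labelled samples.
import numpy as np
import pywt
import torch
import torch.nn as nn
from torchvision import models


def vibration_to_cwt_image(signal: np.ndarray, scales=np.arange(1, 65),
                           wavelet: str = "morl") -> np.ndarray:
    """Turn a 1-D vibration segment into a 2-D time-frequency map via CWT."""
    coeffs, _ = pywt.cwt(signal, scales, wavelet)
    return np.abs(coeffs)  # (len(scales), len(signal)) magnitude image


class SLCNN(nn.Module):
    """SqueezeNet backbone with a self-attention block before the classifier (illustrative)."""

    def __init__(self, num_classes: int = 10, pretrained: bool = True):
        super().__init__()
        weights = models.SqueezeNet1_1_Weights.IMAGENET1K_V1 if pretrained else None
        backbone = models.squeezenet1_1(weights=weights)
        self.features = backbone.features             # transferred ImageNet weights
        self.attn = nn.MultiheadAttention(embed_dim=512, num_heads=4,
                                          batch_first=True)
        self.classifier = nn.Sequential(              # re-initialised head for bearing classes
            nn.Dropout(0.5),
            nn.Conv2d(512, num_classes, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d((1, 1)),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.features(x)                          # (B, 512, H, W)
        b, c, h, w = f.shape
        seq = f.flatten(2).transpose(1, 2)            # (B, H*W, 512)
        attn_out, _ = self.attn(seq, seq, seq)        # self-attention over spatial positions
        f = attn_out.transpose(1, 2).reshape(b, c, h, w)
        return self.classifier(f).flatten(1)          # (B, num_classes)


# Fine-tuning with few samples: freeze the transferred feature extractor and
# train only the attention block and the new classifier head.
model = SLCNN(num_classes=10)
for p in model.features.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3)
```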