Abstract
Extracting features from time-series signals without manual intervention is an essential part of intelligent bearing diagnosis. With its ability to mine both signal information and feature structure information, the deep ConvNet is widely used in bearing fault diagnosis and analysis under complex working conditions. However, because of the complexity of the bearing's operating environment in practice, the sensitive features exhibit distribution characteristics at different scales. Meanwhile, the convolution kernels of a ConvNet are usually small, so the network mainly captures small-scale details of the state distribution while overlooking the overall trend of the feature distribution. Considering that the convolution kernel size determines the scale at which hidden information can be sensed, this paper designs a one-dimensional vision ConvNet (VCN) whose architecture places a multilayer small-kernel network and a single-layer large-kernel network side by side. The multi-kernel structure improves the network's ability to detect fault characteristic frequency bands. The method for setting the large convolution kernel size and stride is discussed through analysis of artificially generated data and experimental data. Compared with a traditional CNN, the deep CNN with wide first-layer kernels (WDCNN), and the multiscale kernel-based ResCNN (MK-ResCNN), the proposed network improves recognition accuracy and yields a more stable training process for rolling bearing fault classification.
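To make the parallel multi-kernel idea concrete, the following is a minimal sketch of such an architecture in PyTorch: a multilayer small-kernel branch and a single-layer large-kernel branch applied to the same 1D signal, with their outputs concatenated for classification. The layer counts, channel widths, kernel sizes, strides, and class count here are illustrative assumptions, not the parameters reported in the paper.

```python
import torch
import torch.nn as nn


class ParallelKernel1DCNN(nn.Module):
    """Sketch of a parallel-branch 1D CNN: stacked small kernels alongside
    a single large kernel. All hyperparameters are assumed for illustration."""

    def __init__(self, num_classes: int = 10, in_channels: int = 1):
        super().__init__()
        # Branch A: multilayer small-kernel network (captures local detail)
        self.small_branch = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # Branch B: single-layer large-kernel network with a larger stride
        # (captures the overall trend of the feature distribution)
        self.large_branch = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=64, stride=4, padding=32),
            nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool1d(1)  # align branch output lengths
        self.classifier = nn.Linear(32 + 32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, signal_length)
        a = self.pool(self.small_branch(x)).squeeze(-1)
        b = self.pool(self.large_branch(x)).squeeze(-1)
        return self.classifier(torch.cat([a, b], dim=1))


if __name__ == "__main__":
    # Example: classify a batch of 2048-sample vibration segments
    model = ParallelKernel1DCNN(num_classes=10)
    signals = torch.randn(8, 1, 2048)
    print(model(signals).shape)  # torch.Size([8, 10])
```

The key design choice mirrored here is that the large-kernel branch stays shallow while the small-kernel branch is deep, so the two branches sense the signal at different scales before their features are fused.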