Abstract

Convolutional neural networks (CNNs) have achieved unprecedented success in motor fault diagnosis. However, two challenges remain: 1) most existing CNN-based diagnostic models are developed under the assumption that the motor operates under stable conditions, whereas mechanical vibration signals are often collected in nonstationary scenarios (e.g., variable-speed and noisy scenarios), so fault-related fluctuation information is easily overwhelmed by interference noise; and 2) previous studies often adopt large multiscale CNN architectures to extract abundant features, which inevitably introduces more parameters. In this study, we develop a lightweight multiscale CNN model for motor fault identification under nonstationary conditions. First, inspired by the human visual system, two lightweight hierarchical perception modules (HPMs), namely HPM1 and HPM2, are introduced. Specifically, the HPMs adopt dilated convolutions with various kernels to model human receptive fields and use a dense connection strategy to simulate visual hierarchies, allowing HPM1 and HPM2 to extract rich multiscale features for fault classification. Second, a joint attention module is explored to guide the model to attend not only to local spatial-wise information but also to global channel-wise information. Finally, a lightweight multiscale CNN model named the coupled visual perceptual network (CVPN) is proposed based on these improvements. The CVPN has far fewer parameters than state-of-the-art multiscale CNN models. Lab and field experimental results demonstrate that the proposed CVPN significantly outperforms the state of the art under various nonstationary conditions.
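To make the described building blocks concrete, the sketch below illustrates, under stated assumptions, how a hierarchical perception module (dilated convolutions with increasing dilation rates joined by dense connections) and a joint spatial/channel attention block could be assembled in PyTorch for 1-D vibration signals. This is not the authors' released code; the layer widths, dilation rates, and class names are hypothetical choices made for illustration.

import torch
import torch.nn as nn


class HierarchicalPerceptionModule(nn.Module):
    """Dilated 1-D convolutions with growing receptive fields, densely connected."""

    def __init__(self, in_channels: int, growth: int = 16, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList()
        channels = in_channels
        for d in dilations:
            self.branches.append(nn.Sequential(
                nn.Conv1d(channels, growth, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.BatchNorm1d(growth),
                nn.ReLU(inplace=True),
            ))
            channels += growth  # dense connection: each branch sees all earlier features

    def forward(self, x):
        features = [x]
        for branch in self.branches:
            features.append(branch(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)


class JointAttention(nn.Module):
    """Global channel-wise attention combined with local spatial-wise attention."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )
        self.spatial_conv = nn.Sequential(
            nn.Conv1d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Global channel-wise reweighting from average-pooled descriptors
        w = self.channel_mlp(x.mean(dim=-1)).unsqueeze(-1)
        x = x * w
        # Local spatial-wise mask over the signal length
        s = self.spatial_conv(x)
        return x * s


if __name__ == "__main__":
    signal = torch.randn(8, 1, 1024)              # batch of raw vibration segments
    hpm = HierarchicalPerceptionModule(in_channels=1)
    attention = JointAttention(channels=hpm(signal).shape[1])
    out = attention(hpm(signal))
    print(out.shape)                              # torch.Size([8, 49, 1024])

In this sketch the dense concatenation plays the role of the visual-hierarchy coupling described in the abstract, while the two attention paths correspond to the local spatial-wise and global channel-wise guidance; the actual CVPN layer configuration is given in the full paper.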
