Deep learning has achieved great success in fault diagnosis (FD). However, most existing methods require an enormous number of parameters to train, limiting their use on lightweight terminals. Moreover, they ignore the correlations between FD and other tasks, which reduces performance. Therefore, this article presents a lightweight, end-to-end framework, the densely supervised multitask one-dimensional convolutional neural network (DSMT-1DCNN), for FD from raw signals. DSMT-1DCNN comprises three blocks, a main network block, a densely supervised block, and a multitask learning block, which together extract rich, discriminative, and low-noise hidden features for more accurate FD. First, the main network block (MNB) applies a 1DCNN to extract common features shared by different tasks from raw signals. Second, a novel supervised learning scheme (SLS) is designed in the densely supervised block (DSB) to supervise each 1-D convolution layer, so that hidden features are learned thoroughly in a lightweight manner. In the DSB, the features learned by each convolution layer are fused into shared fusion features. Finally, these fusion features are fed into the multitask learning block (MTLB) for FD. The MTLB employs two auxiliary tasks, a speed identification branch (SIB) and a load identification branch (LIB), to help the FD branch (FDB) extract rich, discriminative features that account for inter-task correlations. The comprehensive feature used for FD is therefore a fusion of densely supervised CNN features with speed and load information, which makes FD more accurate. Comparative experiments and analysis on three public subsets confirm its state-of-the-art performance in terms of accuracy and model size. We also demonstrate the generality of the SLS across three 1DCNN structures.
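The three-block structure described above can be sketched as follows. This is a minimal PyTorch illustration assuming placeholder layer sizes, kernel sizes, and class counts (`n_faults`, `n_speeds`, `n_loads`); the paper's actual hyperparameters and loss weighting are not given in the abstract, so everything beyond the MNB/DSB/MTLB skeleton is a hypothetical choice.

```python
import torch
import torch.nn as nn

class DSMT1DCNN(nn.Module):
    """Hypothetical sketch of the DSMT-1DCNN skeleton from the abstract."""

    def __init__(self, n_faults=10, n_speeds=4, n_loads=4):
        super().__init__()
        # Main network block (MNB): stacked 1-D convolutions that extract
        # features shared by all tasks from the raw signal.
        self.convs = nn.ModuleList([
            nn.Sequential(nn.Conv1d(c_in, c_out, kernel_size=3, padding=1),
                          nn.BatchNorm1d(c_out), nn.ReLU())
            for c_in, c_out in [(1, 16), (16, 32), (32, 64)]
        ])
        # Densely supervised block (DSB): one lightweight auxiliary head per
        # convolution layer, so every layer receives a supervision signal.
        self.aux_heads = nn.ModuleList(
            nn.Linear(c, n_faults) for c in (16, 32, 64)
        )
        fused_dim = 16 + 32 + 64  # per-layer pooled features are concatenated
        # Multitask learning block (MTLB): fault-diagnosis branch (FDB) plus
        # auxiliary speed (SIB) and load (LIB) identification branches.
        self.fdb = nn.Linear(fused_dim, n_faults)
        self.sib = nn.Linear(fused_dim, n_speeds)
        self.lib = nn.Linear(fused_dim, n_loads)

    def forward(self, x):  # x: (batch, 1, signal_length)
        pooled, aux_logits = [], []
        for conv, head in zip(self.convs, self.aux_heads):
            x = conv(x)
            p = x.mean(dim=-1)          # global average pooling per layer
            pooled.append(p)
            aux_logits.append(head(p))  # dense per-layer supervision
        fused = torch.cat(pooled, dim=1)  # fusion of per-layer features
        return self.fdb(fused), self.sib(fused), self.lib(fused), aux_logits
```

In training, the overall objective would presumably combine the FDB loss with weighted SIB, LIB, and per-layer auxiliary losses; the weighting scheme is not specified in the abstract.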