Abstract

Intelligent diagnosis of rotating machinery based on deep learning (DL) has made tremendous progress and can help reduce costly breakdowns. However, different studies use different datasets and hyper-parameters, and few open-source codes are publicly available, resulting in unfair comparisons and ineffective improvement. To address these issues, we perform a comprehensive evaluation of four models, including the multi-layer perceptron (MLP), auto-encoder (AE), convolutional neural network (CNN), and recurrent neural network (RNN), to provide a benchmark study. We first gather nine publicly available datasets and conduct a comprehensive benchmark study of DL-based models with two data split strategies, five input formats, three normalization methods, and four augmentation methods. Second, we integrate the whole evaluation code into a code library and release it to the public for better comparisons. Third, we use specifically designed cases to point out existing issues, including class imbalance, generalization ability, interpretability, few-shot learning, and model selection. Finally, we release a unified code framework for comparing and testing models fairly and quickly, emphasize the importance of open-source code, provide baseline accuracies (a lower bound), and discuss existing issues in this field. The code library is available at: https://github.com/ZhaoZhibin/DL-based-Intelligent-Diagnosis-Benchmark.
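The preprocessing pipeline the abstract describes (splitting long vibration records into samples, normalizing them, and augmenting them) can be sketched as below. This is a minimal illustration of common choices for 1-D vibration signals; the function names, window sizes, and scaling range are assumptions for illustration, not the paper's exact three normalization or four augmentation methods.

```python
import numpy as np


def sliding_window(signal, length=1024, step=512):
    """Split one long vibration record into fixed-length samples.

    A typical way to build training samples from a continuous record;
    the window length and step here are illustrative defaults.
    """
    n = (len(signal) - length) // step + 1
    return np.stack([signal[i * step: i * step + length] for i in range(n)])


def zscore_normalize(sample):
    """Z-score normalization of a single sample (one common choice
    among several normalization methods)."""
    return (sample - sample.mean()) / (sample.std() + 1e-12)


def random_scale(sample, low=0.9, high=1.1, rng=None):
    """Amplitude-scaling augmentation: multiply the sample by a random
    factor. Illustrative of signal-level augmentation only."""
    rng = np.random.default_rng() if rng is None else rng
    return sample * rng.uniform(low, high)


if __name__ == "__main__":
    record = np.sin(np.linspace(0, 200 * np.pi, 4096))  # toy vibration record
    samples = sliding_window(record)                    # shape: (n, 1024)
    samples = np.stack([zscore_normalize(s) for s in samples])
    augmented = np.stack([random_scale(s) for s in samples])
    print(samples.shape, augmented.shape)
```

Under a chronological data split strategy, the windows would additionally be assigned to train/test sets in time order rather than at random, to avoid overlap-induced leakage between sets.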
