Abstract

Deep learning-based fault diagnosis relies on rich labeled data to achieve promising performance. In most real-world cases, however, only a few labeled samples can be acquired for a fault diagnosis task, far short of what is required to train a deep model from scratch. To tackle this problem, a novel two-step fine-tuning strategy is proposed that transfers information from a relevant auxiliary task by tuning the less task-specific weights, thereby extending the conventional fine-tuning method. A lightweight model is adopted to reduce data consumption. Furthermore, a distance loss function is designed and embedded into the training process, together with a dynamic tuning scheme, to produce sparser feature representations. Comprehensive experiments show that the proposed method markedly improves diagnosis performance and robustness under limited data and has the potential to be applied to fault diagnosis under varying working conditions.
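
As a minimal sketch of the idea summarized above, the following PyTorch snippet illustrates a two-step fine-tuning loop (first adapting only the task-specific head, then tuning the full network at a smaller rate) combined with a distance-style sparsity penalty on the learned features. The network architecture, the backbone/head split, the L1 distance-to-zero penalty, and all hyperparameters (`lam`, learning rates, epochs) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): two-step fine-tuning with an
# auxiliary distance-style sparsity penalty on the feature representations.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LightweightNet(nn.Module):
    """Small 1-D CNN stand-in for the lightweight diagnosis model (assumed)."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16, num_classes)

    def forward(self, x):
        feat = self.backbone(x)          # shared, less task-specific features
        return self.head(feat), feat

def distance_sparsity_loss(feat: torch.Tensor) -> torch.Tensor:
    """Illustrative L1 distance-to-zero penalty encouraging sparse features."""
    return feat.abs().mean()

def fine_tune(model, loader, step: int, epochs: int = 5, lam: float = 0.01):
    # Step 1: freeze the pretrained backbone, adapt only the task head.
    # Step 2: unfreeze everything and tune all weights at a smaller rate.
    for p in model.backbone.parameters():
        p.requires_grad = (step == 2)
    lr = 1e-3 if step == 1 else 1e-4
    opt = torch.optim.Adam(
        filter(lambda p: p.requires_grad, model.parameters()), lr=lr
    )
    for _ in range(epochs):
        for x, y in loader:              # x: (B, 1, L) signals, y: class labels
            logits, feat = model(x)
            loss = F.cross_entropy(logits, y) + lam * distance_sparsity_loss(feat)
            opt.zero_grad()
            loss.backward()
            opt.step()

# Usage under these assumptions: run fine_tune(model, loader, step=1)
# on the target task, then fine_tune(model, loader, step=2).
```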
