Abstract

The application of deep learning to fault diagnosis has made encouraging progress in recent years. However, it is hard to obtain sufficient labeled data to ensure the performance of diagnostic models, due to complex and varying working conditions. Over-fitting often occurs when only a few labeled samples are available for training. To address this crucial problem, a novel transfer-learning method called the selective normalized multiscale convolutional adversarial network (SNMCAN) is proposed in this paper. The proposed model introduces multiscale convolutional neural networks (CNNs) to capture rich fault feature information at multiple scales. The batch normalization (BN) module, widely used in CNNs, is reconstructed into a new normalization method called ‘selective normalization’ to transfer diagnostic knowledge from a pre-trained model and avoid over-fitting with limited labeled data. Joint maximum mean discrepancy (JMMD) is applied to minimize the joint distribution discrepancy between domains and improve domain alignment. An adversarial training strategy is further adopted to align the distributions of the source and target domains. The superiority of the proposed method is demonstrated in two case studies, whose results show that the SNMCAN achieves better fault-diagnosis performance than the comparison methods.
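The JMMD criterion mentioned above generalizes the classical maximum mean discrepancy (MMD) from marginal to joint distributions across several network layers. As a minimal sketch of the underlying idea (not the paper's implementation), the single-layer Gaussian-kernel MMD between source and target feature batches can be estimated as follows; all function names and parameters here are illustrative:

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # Pairwise Gaussian kernel values between the rows of a and b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    # Biased empirical estimate of the squared MMD between samples x and y:
    # E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)].
    return (gaussian_kernel(x, x, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean()
            - 2.0 * gaussian_kernel(x, y, sigma).mean())
```

In a domain-adaptation setting, a term like `mmd2(source_features, target_features)` would be added to the training loss so that minimizing it pulls the two feature distributions together; JMMD extends this by combining kernels over the joint activations of multiple layers.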
