Abstract

Multimodality is a universal characteristic of multi-source monitoring data for rotating machinery. The correlated fusion of multimodal information is a general principle for strengthening the cognition of fault features, and an effective way to improve the reliability and robustness of fault diagnosis methods. However, gaps in physical connotation between multimodal information hinder the construction of such correlations, preventing mainstream machine learning (ML) based intelligent diagnosis methods from reliably and effectively exploiting multimodal fusion. To address these issues, a physics-inspired multimodal fusion convolutional neural network (PMFN) is proposed in this paper. It is the first attempt to integrate physical knowledge into ML models to bridge the physical connotation gaps between multimodal fault information. Specifically, the characterization patterns of rotating machinery faults in multimodal information are embedded into the attention mechanism, so that it focuses on representative fault features with physical properties and generates a universal representation of the multimodal information. Furthermore, a cross-modal correlation fusion module is introduced to adaptively construct correlations among multimodal information, thereby highlighting both the unique features of each individual modality and the shared representation across modalities. Finally, the superiority of the proposed fusion method is verified in two case studies: an industrial gearbox and a bearing-rotor system.
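
The abstract only outlines the architecture; the sketch below is a minimal, hypothetical illustration of the general idea of a cross-modal correlation fusion module, in which each modality attends to the other to extract a shared representation while its own unimodal (unique) features are retained. The class name CrossModalFusion, the use of multi-head attention, and all dimensions are assumptions for illustration, not the paper's actual PMFN implementation.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Hypothetical sketch of cross-modal correlation fusion:
    each modality queries the other via attention (shared representation),
    and the attended features are concatenated with the original
    unimodal features (unique representation) before projection."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn_ab = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_ba = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(4 * dim, dim)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        # Shared representation: modality A attends to modality B, and vice versa.
        shared_a, _ = self.attn_ab(feat_a, feat_b, feat_b)
        shared_b, _ = self.attn_ba(feat_b, feat_a, feat_a)
        # Keep the unique unimodal features alongside the shared ones.
        fused = torch.cat([feat_a, feat_b, shared_a, shared_b], dim=-1)
        return self.proj(fused)

# Example: batch of 8 vibration and acoustic feature sequences (length 32, dim 64).
vib = torch.randn(8, 32, 64)
aco = torch.randn(8, 32, 64)
fusion = CrossModalFusion(dim=64)
print(fusion(vib, aco).shape)  # torch.Size([8, 32, 64])
```

In the paper's method, such attention weights would additionally be guided by physical knowledge of fault characterization patterns; that physics-inspired weighting is specific to PMFN and is not reproduced in this generic sketch.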
