Abstract

Deep learning (DL)-based approaches have demonstrated remarkable performance in predicting the remaining useful life (RUL) of complex systems, which is beneficial for making timely maintenance decisions. However, most of these DL methods lack interpretability, and they struggle to extract degradation features in the presence of significant measurement noise. To remedy these deficiencies, a multi-channel fusion variational autoencoder (MCFVAE)-based approach is proposed. A feature fusion module is designed to capture and fuse the multi-channel features, which facilitates the extraction of degradation information from the multi-sensor data. A variational inference module is then introduced to generate compressive representations and project them into a latent space as an interpretable component, which can display the degradation degree of the multi-sensor system. A regressor module is finally used to establish the relationship between the compressive representations and the RUL. The superior feature fusion and distribution learning abilities of the MCFVAE enable robust and interpretable RUL prediction. The effectiveness and superiority of the proposed method are experimentally validated on the publicly available Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) dataset and compared against existing methods.
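The pipeline the abstract describes, per-channel feature extraction and fusion, a variational latent code obtained via the reparameterization trick, and a regression head mapping the latent code to RUL, can be sketched as follows. This is a minimal illustrative forward pass only: all layer sizes, the 2-D latent space, and every name here are assumptions, not the paper's actual specification.

```python
import numpy as np

rng = np.random.default_rng(42)

def relu(x):
    return np.maximum(x, 0.0)

class MCFVAE:
    """Illustrative sketch of a multi-channel fusion VAE with a
    regressor head for RUL prediction (hypothetical sizes/names)."""

    def __init__(self, n_channels=14, window=30, hidden=16, latent=2):
        self.n_channels = n_channels
        # feature fusion module, part 1: one small encoder per sensor channel
        self.enc_w = [rng.normal(0, 0.1, (window, hidden)) for _ in range(n_channels)]
        self.enc_b = [np.zeros(hidden) for _ in range(n_channels)]
        # feature fusion module, part 2: fuse the concatenated channel features
        self.fuse_w = rng.normal(0, 0.1, (n_channels * hidden, hidden))
        self.fuse_b = np.zeros(hidden)
        # variational inference module: mean and log-variance heads
        self.mu_w = rng.normal(0, 0.1, (hidden, latent)); self.mu_b = np.zeros(latent)
        self.lv_w = rng.normal(0, 0.1, (hidden, latent)); self.lv_b = np.zeros(latent)
        # regressor module: latent code -> scalar RUL
        self.reg_w = rng.normal(0, 0.1, (latent, 1)); self.reg_b = np.zeros(1)

    def forward(self, x):
        # x: (n_channels, window) windowed multi-sensor readings
        feats = [relu(x[c] @ self.enc_w[c] + self.enc_b[c])
                 for c in range(self.n_channels)]
        fused = relu(np.concatenate(feats) @ self.fuse_w + self.fuse_b)
        mu = fused @ self.mu_w + self.mu_b
        logvar = fused @ self.lv_w + self.lv_b
        # reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
        z = mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)
        rul = float(z @ self.reg_w + self.reg_b)
        return z, rul

model = MCFVAE()
z, rul = model.forward(rng.normal(size=(14, 30)))
```

The 2-D latent code `z` is what makes the model inspectable: plotting it over a unit's lifetime would show the trajectory of degradation, which is the interpretability the abstract claims.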
