Prognostics and Health Management (PHM) of modern equipment typically relies on Remaining Useful Life (RUL) prediction to assess health status. Mainstream RUL prediction methods have improved performance by adopting data-driven deep learning, but most overlook the noise introduced by cross-channel redundancy in sensor data, which degrades RUL prediction performance. Moreover, these methods lack transparency and interpretability, both crucial for maintenance personnel to accurately diagnose the equipment degradation process. We therefore propose a Factorized temporal-channel fusion and Feature fusion based Variational Encoding (FFVE) method for interpretable RUL prediction. Using a factorization operation, we construct a Factorized Temporal-Channel Fusion (FTCF) block that learns temporal and channel dependencies, thereby reducing redundancy between channels. Feature fusion operations, which mix the original input with extracted features, replenish original information lost during deep network learning and thus avoid the performance degradation caused by increasing network depth. Through this encoding process, sensor data are effectively compressed into a 3D latent space for predicting and interpreting the equipment degradation process. Extensive experiments on two datasets, the C-MAPSS turbofan aircraft engine and the NASA lithium-ion battery, demonstrate that our method outperforms state-of-the-art methods in prediction accuracy. Additionally, our study provides a novel visual means of assessing turbofan engine health status, elucidating the engine RUL degradation process learned by our method.
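The two mechanisms named above can be illustrated with a minimal NumPy sketch. This is our own toy construction, not the paper's implementation: the abstract does not give the FTCF equations, so the weight matrices, shapes, and the simple additive fusion here are all assumptions. The sketch shows how factorizing the mixing into a small temporal matrix and a small channel matrix is far cheaper than one dense transform over the flattened window, and how feature fusion blends the raw input back into the extracted features.

```python
import numpy as np

# Toy window of sensor data: C channels x T time steps
# (sizes are illustrative, not from the paper).
rng = np.random.default_rng(0)
C, T = 4, 8
X = rng.standard_normal((C, T))

# Factorized temporal-channel mixing (assumed form): one small
# matrix mixes along time, another mixes along channels, instead
# of a single dense (C*T) x (C*T) transform.
W_t = rng.standard_normal((T, T)) / np.sqrt(T)  # temporal mixing
W_c = rng.standard_normal((C, C)) / np.sqrt(C)  # channel mixing

temporal = X @ W_t        # mix across time, independently per channel
features = W_c @ temporal  # mix across channels

# Feature fusion (assumed residual-style blend): add the original
# input back so deeper stacks retain access to the raw signal.
out = X + features
print(out.shape)  # -> (4, 8)

# Parameter count: factorized T*T + C*C vs dense (C*T)^2.
print(T * T + C * C, "vs", (C * T) ** 2)  # -> 80 vs 1024
```

The parameter comparison is the point of the factorization: the two small mixing matrices grow with T² + C² rather than (C·T)², which is what lets the block model temporal and channel dependencies without a redundancy-prone dense projection.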