Abstract

Under high-noise conditions and random impacts, which together constitute strong interference, models often have limited capability to capture long-term dependencies, reducing the accuracy of remaining useful life (RUL) prediction for bearings. To address this issue, a spatiotemporal fusion network capable of ultra-long-term feature analysis is proposed to improve bearing RUL prediction accuracy under such interference. The network uses a dilated-convolution-based lightweight vision transformer encoder to extract spatial features that reflect the bearing's short-term degradation state. These features are then fed sequentially into an adaptive tiered memory unit, built on multiple attention mechanisms and a neuron layering mechanism, to analyze temporal features indicative of long-term degradation. The short-term spatial and long-term temporal features are finally fused for RUL prediction. To validate the method under strong interference, an accelerated test platform for gearbox rolling bearings is constructed to simulate high noise and random impacts. Experiments on this platform confirm that the proposed method achieves high robustness and predictive accuracy under strong interference.
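To make the described pipeline concrete, the sketch below shows one plausible way the three stages could be wired together: a dilated-convolution ViT-style encoder for short-term spatial features, a stand-in for the adaptive tiered memory unit (here approximated with stacked GRUs plus an attention readout), and a fusion head for RUL regression. This is a minimal illustration assuming PyTorch; the class names, layer choices, and hyperparameters (e.g. `DilatedViTEncoder`, `TieredMemoryUnit`, `embed_dim`, `num_tiers`) are hypothetical and do not reproduce the authors' implementation.

```python
# Illustrative sketch only: hypothetical module names and layer choices,
# not the authors' architecture.
import torch
import torch.nn as nn


class DilatedViTEncoder(nn.Module):
    """Lightweight ViT-style encoder: a dilated 1-D convolution embeds a short
    vibration window into tokens, followed by transformer encoder layers."""
    def __init__(self, in_channels=1, embed_dim=64, num_heads=4):
        super().__init__()
        self.embed = nn.Conv1d(in_channels, embed_dim, kernel_size=3,
                               dilation=2, padding=2)
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=num_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x):                    # x: (batch, channels, window_len)
        z = self.embed(x).transpose(1, 2)    # (batch, tokens, embed_dim)
        z = self.encoder(z)
        return z.mean(dim=1)                 # pooled short-term spatial feature


class TieredMemoryUnit(nn.Module):
    """Stand-in for the adaptive tiered memory unit: stacked GRU layers with an
    attention readout that summarizes long-term degradation trends."""
    def __init__(self, embed_dim=64, hidden=64, num_tiers=2):
        super().__init__()
        self.rnn = nn.GRU(embed_dim, hidden, num_layers=num_tiers,
                          batch_first=True)
        self.attn = nn.Linear(hidden, 1)

    def forward(self, seq):                  # seq: (batch, time, embed_dim)
        h, _ = self.rnn(seq)
        w = torch.softmax(self.attn(h), dim=1)
        return (w * h).sum(dim=1)            # attention-weighted long-term feature


class SpatioTemporalRUL(nn.Module):
    """Fuses short-term spatial and long-term temporal features for RUL."""
    def __init__(self, embed_dim=64, hidden=64):
        super().__init__()
        self.spatial = DilatedViTEncoder(embed_dim=embed_dim)
        self.temporal = TieredMemoryUnit(embed_dim=embed_dim, hidden=hidden)
        self.head = nn.Linear(embed_dim + hidden, 1)

    def forward(self, windows):              # windows: (batch, time, channels, len)
        b, t = windows.shape[:2]
        feats = self.spatial(windows.flatten(0, 1)).view(b, t, -1)
        fused = torch.cat([feats[:, -1], self.temporal(feats)], dim=-1)
        return self.head(fused)              # predicted RUL (regression output)


# Example: a batch of 8 sequences, each with 20 windows of 256 samples.
rul = SpatioTemporalRUL()(torch.randn(8, 20, 1, 256))   # -> shape (8, 1)
```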
