Abstract

Remaining useful life (RUL) prediction plays an indispensable role in the reliable operation and improved maintenance of rolling bearings. Data-driven methods based on deep learning have made significant progress in RUL prediction; however, most such methods consider only the correlation between channels, ignoring the varying importance of different time steps for RUL prediction. In addition, effectively fusing the degradation features of rolling bearings to improve a model's RUL prediction performance remains challenging. To address these issues, this paper proposes a novel data-driven RUL prediction method named the dual-stream temporal convolution network (DSTCN). First, a hybrid attention temporal convolution block (HATCB) is designed to capture the correlation of degradation features along both the channel dimension and the temporal dimension. Second, a one-dimensional attention fusion module is designed; it recalibrates and assigns weights to adaptively fuse different degradation features. The Hilbert marginal spectrum, obtained via the Hilbert–Huang transform, is then used as the input to one stream, while the vibration signal is used as the input to the other, building a dual-stream temporal convolution network that realizes RUL prediction. The effectiveness of the proposed method is validated on two life-cycle datasets, and the results show that it achieves lower prediction error than competing methods for RUL prediction and prognostic analysis.
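To make the signal-processing step concrete, the sketch below shows one common way to compute a Hilbert marginal spectrum: take the analytic signal of each intrinsic mode function (IMF), derive instantaneous amplitude and frequency, and accumulate amplitude into frequency bins. This is a minimal NumPy illustration, not the paper's implementation; the empirical mode decomposition that produces the IMFs is assumed to have been done elsewhere (here a pure tone stands in for a single IMF), and the bin count and helper names are arbitrary choices.

```python
import numpy as np

def analytic_signal(x):
    """FFT-based Hilbert transform: zero out negative frequencies,
    double positive ones, and inverse-transform."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def hilbert_marginal_spectrum(imfs, fs, n_bins=64):
    """For each IMF, bin instantaneous amplitude by instantaneous
    frequency and sum over time; accumulate across IMFs."""
    spectrum = np.zeros(n_bins)
    edges = np.linspace(0.0, fs / 2.0, n_bins + 1)
    for imf in imfs:
        z = analytic_signal(imf)
        amp = np.abs(z)
        phase = np.unwrap(np.angle(z))
        # instantaneous frequency in Hz from the phase derivative
        inst_freq = np.diff(phase) * fs / (2.0 * np.pi)
        idx = np.clip(np.digitize(inst_freq, edges) - 1, 0, n_bins - 1)
        np.add.at(spectrum, idx, amp[:-1])
    return edges[:-1], spectrum

# Demo: a 50 Hz tone sampled at 1 kHz, treated as a single "IMF".
fs = 1000
t = np.arange(0, 1, 1.0 / fs)
x = np.sin(2.0 * np.pi * 50.0 * t)
freqs, hms = hilbert_marginal_spectrum([x], fs)
peak_freq = freqs[np.argmax(hms)]  # bin edge nearest 50 Hz
```

In the dual-stream setup described above, a spectrum like `hms` (computed per observation window) would feed one stream while the raw vibration window feeds the other.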

