Abstract

For prognostics and health management of industrial systems, machine remaining useful life (RUL) prediction is an essential task. While deep learning-based methods have achieved great success in RUL prediction tasks, large-scale neural networks remain difficult to deploy on edge devices owing to constraints on memory capacity and computing power. In this paper, we propose a lightweight and adaptive knowledge distillation (KD) framework to alleviate this problem. First, multiple teacher models are compressed into a student model through KD to improve prediction accuracy. Second, a dynamic exiting method is studied to enable adaptive inference on the distilled student model. Finally, we develop a reparameterization scheme to further compress the student network. Experiments on two turbofan engine degradation datasets and a bearing degradation dataset demonstrate that our method significantly outperforms state-of-the-art KD methods and equips the distilled model with adaptive inference capability.
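The abstract does not spell out the training objective, but the multi-teacher distillation step it describes is commonly implemented by combining a supervised loss with a loss against aggregated teacher predictions. The following is a minimal sketch under assumed choices, not the authors' method: a PyTorch setup, a regression-style RUL output, a plain average of teacher predictions, and the names `multi_teacher_kd_loss` and `alpha` are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def multi_teacher_kd_loss(student_out, teacher_outs, targets, alpha=0.5):
    """Illustrative multi-teacher distillation loss for RUL regression.

    student_out:  (batch, 1) RUL predictions from the student
    teacher_outs: list of (batch, 1) predictions from frozen teacher models
    targets:      (batch, 1) ground-truth RUL values
    alpha:        weight balancing the supervised and distillation terms
    """
    # Supervised term: student prediction vs. ground-truth RUL
    task_loss = F.mse_loss(student_out, targets)

    # Distillation term: student prediction vs. the averaged teacher prediction
    teacher_avg = torch.stack(teacher_outs).mean(dim=0)
    distill_loss = F.mse_loss(student_out, teacher_avg)

    return alpha * task_loss + (1.0 - alpha) * distill_loss
```

A plain mean over teachers is only the simplest aggregation; in practice the teacher weighting could be fixed, learned, or made sample-adaptive, and the paper's own scheme may differ.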
