Abstract

Remaining useful life (RUL) prediction is a key enabler for devising optimal maintenance strategies. Data-driven approaches, especially those employing neural networks (NNs) such as multi-layer perceptrons (MLPs) and convolutional neural networks (CNNs), have gained increasing attention in the field of RUL prediction. Most past research has focused on minimizing the RUL prediction error by training NNs with back-propagation (BP), which in general requires extensive computational effort. In practice, however, such BP-based NNs (BPNNs) may not be affordable in industrial contexts that normally seek to save cost by minimizing access to expensive computing infrastructures. Driven by this motivation, here we propose: (1) to use a very fast learning scheme called the extreme learning machine (ELM) for training two different kinds of feed-forward neural networks (FFNNs), namely a single-layer feed-forward neural network (SL-FFNN) and a Convolutional ELM (CELM); and (2) to optimize the architecture of those networks by applying evolutionary computation. More specifically, we employ a multi-objective optimization (MOO) technique to search for the best network architectures in terms of the trade-off between RUL prediction error and the number of trainable parameters, the latter being correlated with computational effort. In our experiments, we test our methods on a widely used benchmark dataset, C-MAPSS, searching for such trade-off solutions. Compared to other methods based on BPNNs, our methods outperform an MLP and show a similar level of performance to a CNN in terms of prediction error, while using a much smaller (up to two orders of magnitude) number of trainable parameters.
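To illustrate why ELM training is so much faster than back-propagation, the following minimal sketch (not the paper's actual implementation; all function names and sizes are hypothetical) shows the core idea: hidden-layer weights are drawn at random and frozen, so the only trainable parameters are the output weights, which are obtained in closed form by least squares rather than by iterative gradient descent.

```python
import numpy as np

def elm_train(X, y, n_hidden=64, seed=0):
    """Train a single-hidden-layer ELM regressor.

    Hidden weights W and biases b are random and fixed; only the
    output weights beta are fitted, via a linear least-squares solve.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Predict targets (e.g. RUL values) for feature matrix X."""
    return np.tanh(X @ W + b) @ beta
```

Because the only fit step is one linear solve, training cost scales with a single matrix factorization instead of many BP epochs, which is the property the abstract trades off against prediction error.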
