Abstract

Condition-based monitoring is a key element in minimizing plant upsets and production losses while guaranteeing safety and asset integrity, with the final goal of improving operational excellence. A key challenge for this purpose is the ability to anticipate unexpected behaviors, such as undesired trends or spikes in key sensor measurements (e.g., temperatures or vibrations), which might lead to equipment failures. To this end, we implemented innovative Deep Learning algorithms to predict the future trend of sensor variables related to the health condition of important pieces of equipment and subsystems of a large Eni offshore facility. We show that the prediction accuracy achieved by Deep Learning algorithms makes them ideal candidates for a real production setting.

We present a multi-step Deep Learning pipeline consisting of (a) multivariate time-series resampling, (b) model design, and (c) model evaluation. Ten months of historical sensor data were split into training and test sets (80%-20%) in chronological order. Two Deep Learning (DL) models were implemented: (a) a sequence-to-sequence (seq2seq) LSTM (Long Short-Term Memory) encoder-decoder with and without an attention mechanism (LSTM-EDA), and (b) a seq2seq temporal convolutional network (TCN). Many-to-one (one output value predicted from multiple input values) and many-to-many (multiple outputs predicted from multiple input values) models were implemented using several prediction output windows (w = {4, 16, 32} time steps) as prediction horizons. Models were compared and evaluated with Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and R-squared metrics. To test whether training on forecast sensor data is beneficial, each model was trained on measured data and also on forecast data, with the forecasting horizon ranging from 4 up to 32 time steps, i.e., 8 to 48 hours. To evaluate the forecasting performance, we calculated the RMSE and MAE between the actual and predicted values. The RMSE and MAE reflect the discrepancy between the actual values and the predicted ones, while the R-squared represents the trend accuracy of each output window between the actual data and the predicted data.

Overall, the DL models (LSTM-EDA and TCN) performed with high accuracy, showing very low MAE (0.03 and 0.02) and RMSE (0.05 and 0.03), with R-squared values of 0.94 and 0.85. With reference to the width of the output horizon (w = {4, 16, 32}), the models showed that the larger the horizon, the harder the prediction task and the lower the model accuracy: the MAE and RMSE of the trained DL models increase with the forecasting horizon because the predicted output signals are less accurate over larger horizons. In addition, we showed that both models work well when predicting signal trends, while sudden signal spikes remain hard to predict with the same accuracy.

Multivariate time-series forecasting faces a major research challenge in capturing and leveraging the dynamic dependencies among multiple variables, and DL is by far the most promising technology in this area. Recent developments in DL models offer a clear advantage in summarizing data with many abstraction layers. Through in-depth analysis and empirical evidence, we showed the efficiency of the LSTM-EDA and TCN architectures, which successfully capture both short-term and long-term repeating patterns and use them to effectively forecast the future values of one or more sensor signals.
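To make the pipeline steps concrete, the following is a minimal sketch of step (a), multivariate time-series resampling, together with the chronological 80%-20% split and the windowing used for many-to-many forecasting. The file name, column names, 2-hour resampling step, and 64-step input length are illustrative assumptions, not values stated in the abstract; the output horizon matches the paper's w in {4, 16, 32} time steps.

```python
import numpy as np
import pandas as pd

# Hypothetical sensor log: timestamped temperature/vibration readings
# (file and column names are placeholders, not from the paper).
df = pd.read_csv("sensor_log.csv", parse_dates=["timestamp"], index_col="timestamp")

# (a) Resample the multivariate series onto a regular grid; the 2-hour
# step is an assumption made only for this sketch.
df = df.resample("2H").mean().interpolate(method="time")

# Chronological 80%-20% train/test split (no shuffling).
split = int(len(df) * 0.8)
train, test = df.iloc[:split], df.iloc[split:]

def make_windows(values, n_in=64, n_out=32):
    """Build (input window, output window) pairs for many-to-many forecasting.

    n_in is an assumed input length; n_out corresponds to one of the
    output horizons w in {4, 16, 32} time steps.
    """
    X, y = [], []
    for i in range(len(values) - n_in - n_out + 1):
        X.append(values[i : i + n_in])
        y.append(values[i + n_in : i + n_in + n_out, 0])  # forecast the first sensor
    return np.asarray(X), np.asarray(y)

X_train, y_train = make_windows(train.to_numpy())
X_test, y_test = make_windows(test.to_numpy())
```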
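For step (b), model design, a minimal seq2seq LSTM encoder-decoder (without the attention mechanism) can be sketched in Keras as below. Layer sizes, optimizer, loss, and training settings are illustrative assumptions, not the paper's actual hyperparameters.

```python
from tensorflow.keras import layers, models

def build_lstm_encoder_decoder(n_in, n_features, n_out, units=64):
    """Sketch of a seq2seq LSTM encoder-decoder for multi-step forecasting."""
    inputs = layers.Input(shape=(n_in, n_features))
    # Encoder: summarize the input window into a fixed-length context vector.
    context = layers.LSTM(units)(inputs)
    # Decoder: repeat the context for each step of the output horizon and
    # unroll an LSTM over it, emitting one prediction per future time step.
    x = layers.RepeatVector(n_out)(context)
    x = layers.LSTM(units, return_sequences=True)(x)
    outputs = layers.TimeDistributed(layers.Dense(1))(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mae")
    return model

# Usage with the windows built above (targets reshaped to (n, n_out, 1)).
model = build_lstm_encoder_decoder(n_in=64, n_features=X_train.shape[-1], n_out=32)
model.fit(X_train, y_train[..., None], validation_split=0.1, epochs=50, batch_size=32)
```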
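The second model family, the temporal convolutional network, could be sketched along the following lines: a stack of dilated causal 1-D convolutions followed by a dense layer mapping to the w-step output horizon. This simplified sketch omits the residual blocks typically used in TCNs, and all filter counts and dilation rates are assumptions for illustration only.

```python
from tensorflow.keras import layers, models

def build_tcn(n_in, n_features, n_out, filters=32, kernel_size=3):
    """Simplified TCN sketch: dilated causal convolutions + multi-step head."""
    inputs = layers.Input(shape=(n_in, n_features))
    x = inputs
    # Exponentially increasing dilations widen the receptive field so the
    # network can capture both short-term and long-term patterns.
    for dilation in (1, 2, 4, 8):
        x = layers.Conv1D(filters, kernel_size, padding="causal",
                          dilation_rate=dilation, activation="relu")(x)
    # Take the features at the last time step and map them to the
    # w-step forecast (many-to-many output as a vector of length n_out).
    x = layers.Lambda(lambda t: t[:, -1, :])(x)
    outputs = layers.Dense(n_out)(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mae")
    return model
```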
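Finally, step (c), model evaluation, compares actual and predicted output windows with MAE, RMSE, and R-squared. A minimal sketch using scikit-learn (the variable names continue the examples above) is:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# y_test and y_pred have shape (n_samples, w) for an output horizon w.
y_pred = model.predict(X_test)[..., 0]

mae = mean_absolute_error(y_test, y_pred)
rmse = np.sqrt(mean_squared_error(y_test, y_pred))
r2 = r2_score(y_test, y_pred)
print(f"MAE={mae:.3f}  RMSE={rmse:.3f}  R2={r2:.3f}")
```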
