Abstract

In industrial processes, the ability to predict future steps is essential, as it offers long-term insights that benefit strategic decision-making. However, traditional sequence-to-sequence models designed to predict dynamic behaviors suffer from accumulating errors during recurrent prediction, in which previous outputs are fed back as inputs for the next time step. In this article, we propose a dual attention-based encoder–decoder framework specifically designed to enhance multi-step ahead predictions in industrial processes. The dual attention model strategically minimizes error accumulation in the output sequence by leveraging a temporal attention mechanism, which focuses on relevant time steps in the input sequence, and a supervised attention mechanism, which assigns different weights to output sequence errors during training. The supervised attention method, in particular, provides a significant improvement by focusing on minimizing the error of earlier steps during backpropagation using predefined attention weights, resulting in enhanced overall multi-step prediction performance. Experiments on real-world industrial datasets demonstrate that our approach outperforms baseline models, specifically simple sequence-to-sequence and single attention-based sequence-to-sequence models. In fact, our dual attention framework consistently surpasses single attention models, currently regarded as state-of-the-art, at all prediction steps. The proposed approach has potential applications in process monitoring and model predictive control.
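
To make the supervised attention idea concrete, the following is a minimal sketch of a weighted multi-step training loss in which predefined weights emphasize earlier horizon steps, as the abstract describes. The function name `supervised_attention_loss`, the geometric `decay` schedule, and the use of mean squared error are illustrative assumptions; the paper's exact weighting scheme is not specified in the abstract.

```python
import numpy as np

def supervised_attention_loss(y_pred, y_true, decay=0.8):
    """Weighted multi-step loss: earlier horizon steps receive larger weights.

    y_pred, y_true: arrays of shape (batch, horizon).
    decay: hypothetical geometric factor controlling how quickly the weight
           assigned to later prediction steps falls off (an assumption here).
    """
    horizon = y_true.shape[1]
    # Predefined attention weights, largest at the first step, normalized to sum to 1.
    weights = decay ** np.arange(horizon)
    weights /= weights.sum()
    # Per-step mean squared error averaged over the batch.
    step_errors = np.mean((y_pred - y_true) ** 2, axis=0)
    # Weighted sum: errors at early steps dominate the gradient during training.
    return float(np.sum(weights * step_errors))

# Example: 4 sequences predicted 5 steps ahead.
rng = np.random.default_rng(0)
y_true = rng.normal(size=(4, 5))
y_pred = y_true + rng.normal(scale=0.1, size=(4, 5))
print(supervised_attention_loss(y_pred, y_true))
```

In an encoder–decoder setting, this weighted loss would replace the usual uniform average over horizon steps, so that reducing early-step errors (which otherwise propagate through recurrent feedback) is prioritized during backpropagation.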
