Abstract

Deep learning is a powerful tool for feature representation, and many methods based on convolutional neural networks (CNNs) and recurrent neural networks (RNNs) have been applied to fault diagnosis in chemical processes. However, unlike attention mechanisms, these networks are inefficient at capturing long-term dependencies. The transformer, a sequence-to-sequence model built on self-attention and originally designed for natural language processing (NLP), has attracted significant attention in recent years owing to its success in NLP tasks. Fault diagnosis of a chemical process operates on multivariate time series, which resemble text sequences but place an even greater emphasis on long-term dependencies. This paper proposes a modified transformer called the Target Transformer, which combines a self-attention mechanism with a target-attention mechanism for chemical process fault diagnosis. The Tennessee Eastman (TE) process was used to evaluate our method's performance.
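
The abstract does not spell out the architecture, but the two attention stages it names can be illustrated with a minimal PyTorch sketch. Everything below is an assumption made for illustration, not the paper's implementation: the class name TargetTransformerBlock, the learned target_query standing in for the target-attention mechanism (here, a single learned query attending over the self-attended sequence to pool it into one vector for classification), and n_faults=21 (the number of programmed faults in the TE benchmark).

```python
import torch
import torch.nn as nn

class TargetTransformerBlock(nn.Module):
    """Hypothetical sketch: self-attention followed by target-attention."""

    def __init__(self, d_model: int = 64, n_heads: int = 4, n_faults: int = 21):
        super().__init__()
        # Self-attention over the embedded multivariate time series,
        # as in a standard transformer encoder layer.
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # A learned "target" query vector: an assumed stand-in for the
        # paper's target-attention mechanism.
        self.target_query = nn.Parameter(torch.randn(1, 1, d_model))
        self.target_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.classifier = nn.Linear(d_model, n_faults)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time_steps, d_model) — embedded process variables.
        h, _ = self.self_attn(x, x, x)            # capture long-term dependencies
        q = self.target_query.expand(x.size(0), -1, -1)
        t, _ = self.target_attn(q, h, h)          # target query pools the sequence
        return self.classifier(t.squeeze(1))      # per-sample fault logits

# Example: a batch of 8 windows, 100 time steps, 64-dim embeddings.
logits = TargetTransformerBlock()(torch.randn(8, 100, 64))
print(logits.shape)  # torch.Size([8, 21])
```

Under this reading, self-attention lets every time step attend to every other step regardless of distance, which is the advantage over CNNs and RNNs the abstract points to, while the target-attention stage condenses the whole window into a single representation suited to fault classification.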
