Abstract

A recurrent neural network (RNN) is a dynamic neural network in which the current output depends on previous outputs. The long short-term memory (LSTM) network has emerged as a high-performance RNN. However, the original LSTM does not consider variable and sample relevance for process modelling. To overcome this problem, this paper proposes a dual-layer attention-based LSTM (DA-LSTM) network to model a fed-batch fermentation process. In the proposed DA-LSTM, an LSTM is used to extract features from the input data and the multiple time-series outputs of the hidden layer, an encoder input attention mechanism is used to select the relevant driving series in the input data sequence, and a temporal decoder attention mechanism is used to measure the importance of the encoder hidden states. With this deep architecture for high-level representations, the model can learn very complex dynamic systems. To demonstrate the effectiveness of the proposed method, a comparative study with the original LSTM and a single attention-based LSTM is carried out. The results show that the proposed method gives better modelling performance than the alternatives.
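The two attention stages described above can be sketched in numpy, assuming the standard additive-attention form used in dual-stage attention RNNs: the input attention scores each driving series against the previous encoder hidden state, and the temporal attention scores each encoder hidden state against the previous decoder hidden state. All weight matrices, dimensions, and inputs below are illustrative placeholders, not the paper's trained parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical dimensions: n driving series, T time steps, m hidden units.
n, T, m = 5, 10, 16
rng = np.random.default_rng(0)

X = rng.standard_normal((T, n))    # input driving series (each column is one series)
h_prev = np.zeros(m)               # previous encoder hidden state

# --- Stage 1: encoder input attention (select relevant driving series) ---
# Score for series k: e_k = v_e^T tanh(W_e h_prev + U_e x^k).
W_e = 0.1 * rng.standard_normal((T, m))
U_e = 0.1 * rng.standard_normal((T, T))
v_e = 0.1 * rng.standard_normal(T)

scores = np.array([v_e @ np.tanh(W_e @ h_prev + U_e @ X[:, k]) for k in range(n)])
alpha = softmax(scores)            # attention weights over the n driving series
x_tilde = alpha * X[0]             # reweighted encoder input at time step t = 1

# --- Stage 2: temporal decoder attention (weight encoder hidden states) ---
H = rng.standard_normal((T, m))    # encoder hidden states h_1 .. h_T
d_prev = np.zeros(m)               # previous decoder hidden state
W_d = 0.1 * rng.standard_normal((m, m))
U_d = 0.1 * rng.standard_normal((m, m))
v_d = 0.1 * rng.standard_normal(m)

t_scores = np.array([v_d @ np.tanh(W_d @ d_prev + U_d @ H[i]) for i in range(T)])
beta = softmax(t_scores)           # attention weights over the T time steps
context = beta @ H                 # context vector fed to the decoder LSTM
```

In a full model, `x_tilde` would drive the encoder LSTM update and `context` would be concatenated with the decoder input at each step; both attention weight sets are learned jointly with the LSTM parameters by backpropagation.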


