Abstract

Autoencoders and stacked autoencoders (SAEs) are effective for detecting abnormal situations in process monitoring because of their powerful deep feature representation capability. However, SAEs are prone to overfitting during training, which degrades this representation. Furthermore, several nodes in the same layer of an SAE carry duplicate information, so the learned features are strongly correlated. To address these problems, a novel regularization strategy based on the inner product is proposed for the SAE to reduce overfitting more effectively. The modified SAE is called an inner product-based stacked autoencoder (IPSAE). A standard SAE is trained iteratively to reduce the Euclidean distance between the output and input matrices, whereas the IPSAE adds the inner products between the outputs of the neurons to the objective function to regularize the features and reduce feature redundancy. Hence, after the structure of the SAE is determined, it is trained to lower both the reconstruction error and the inner products between neuron outputs, thereby improving the deep feature representation of the industrial process. The proposed model is applied to a numerical system and the Tennessee Eastman benchmark dataset, where it outperforms several state-of-the-art models.
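The sketch below illustrates the kind of objective the abstract describes: a reconstruction term plus a penalty on pairwise inner products between hidden-neuron outputs. It is a minimal illustration, not the authors' implementation; the layer sizes, sigmoid activations, and the weighting parameter `ip_weight` are assumptions made for the example.

```python
# Minimal sketch (assumed, not the paper's code) of an autoencoder layer whose
# loss combines reconstruction error with an inner-product penalty between the
# outputs of different hidden neurons, discouraging redundant features.
import torch
import torch.nn as nn


class InnerProductAE(nn.Module):
    def __init__(self, n_inputs: int, n_hidden: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_inputs, n_hidden), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_inputs), nn.Sigmoid())

    def forward(self, x):
        h = self.encoder(x)          # hidden-neuron outputs, shape (batch, n_hidden)
        return h, self.decoder(h)    # reconstruction, shape (batch, n_inputs)


def ipsae_loss(x, x_hat, h, ip_weight: float = 1e-3):
    # Reconstruction term: squared Euclidean distance between input and output.
    recon = ((x - x_hat) ** 2).sum(dim=1).mean()
    # Inner-product term: pairwise inner products between the outputs of
    # different hidden neurons (columns of h); the diagonal (self products)
    # is removed so only cross-neuron redundancy is penalised.
    gram = h.t() @ h
    off_diag = gram - torch.diag(torch.diag(gram))
    ip_penalty = off_diag.abs().sum() / x.size(0)
    return recon + ip_weight * ip_penalty


# Example usage (52 inputs roughly matches the Tennessee Eastman variables;
# the hidden size here is arbitrary):
model = InnerProductAE(n_inputs=52, n_hidden=20)
x_batch = torch.rand(64, 52)
h, x_hat = model(x_batch)
loss = ipsae_loss(x_batch, x_hat, h)
loss.backward()
```

Stacking such layers and pretraining them one at a time would mirror the usual SAE construction, with the inner-product term applied per layer; how the paper balances the two terms or extends the penalty across layers is not specified in the abstract.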
