Abstract

Visual process monitoring monitors industrial processes by projecting high-dimensional process data into a two-dimensional space, which provides powerful insight into industrial processes and accelerates fault diagnosis. The challenge of visual process monitoring lies in how to project complex process data onto the two-dimensional plane while separating different classes as much as possible. In this paper, a new visual process monitoring method is proposed. First, a stacked reinforced discriminant auto-encoder (SRDAE), which consists of multiple reinforced discriminant auto-encoders (RDAEs), is proposed to extract discriminant features. In the SRDAE, the useful features of the original data and the hidden output of the previous RDAE are combined as the input of the subsequent RDAE, and the class-label error is added to the loss function of each RDAE. As a result, the SRDAE prevents the loss of useful information from the original data in the higher layers and gives the extracted features a strong ability to separate different classes. Furthermore, to extract more informative discriminant features, the minimal-redundancy maximal-relevance (MRMR) technique is used to select important neurons from all layers of the SRDAE as the final feature representation of the original data. Finally, a stacked supervised t-distributed stochastic neighbor embedding network is proposed to visualize the discriminant features for process monitoring. The effectiveness of the proposed method is validated on the Tennessee Eastman process; the experiments show that the proposed method can effectively separate different classes to achieve intuitive and efficient process monitoring.
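
The following is a minimal sketch, in PyTorch, of how a single RDAE layer could combine a reconstruction term with a class-label term in its loss, as described above. The layer sizes, the sigmoid activations, the cross-entropy form of the label error, and the trade-off weight `alpha` are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of one reinforced discriminant auto-encoder (RDAE) layer.
# Assumptions: sigmoid activations, MSE reconstruction, cross-entropy label error,
# and a weight `alpha` balancing the two terms.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RDAE(nn.Module):
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(hid_dim, in_dim), nn.Sigmoid())
        self.classifier = nn.Linear(hid_dim, n_classes)  # supervised discriminant head

    def forward(self, x):
        h = self.encoder(x)          # hidden features; in a stacked SRDAE these would be
                                     # concatenated with selected original features and
                                     # fed to the next RDAE
        x_rec = self.decoder(h)      # reconstruction of the input
        logits = self.classifier(h)  # class-label prediction from the hidden features
        return h, x_rec, logits

def rdae_loss(x, x_rec, logits, labels, alpha=0.5):
    # Loss = reconstruction error + weighted class-label error,
    # mirroring the idea of adding the label error to the auto-encoder loss.
    rec = F.mse_loss(x_rec, x)
    cls = F.cross_entropy(logits, labels)
    return rec + alpha * cls
```

In a stacked setting, the hidden output `h` of one RDAE would be concatenated with the retained original features to form the input of the next RDAE, so that information from the raw data is not lost in higher layers.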
