Abstract
Kernel principal component analysis (KPCA)-based process monitoring methods have recently been shown to be very effective for monitoring nonlinear processes. However, their performance largely depends on the kernel function, and there is currently no general rule for kernel selection. Existing methods simply choose the kernel function empirically or experimentally from a given set of candidates. This paper proposes a kernel function learning method for KPCA that learns a kernel function tailored to the specific data and explores its potential for KPCA-based process monitoring. Motivated by the manifold learning method maximum variance unfolding (MVU), we obtain the kernel function by optimizing over a family of data-dependent kernels such that the nonlinear structure in the input data is unfolded in the kernel feature space and becomes more nearly linear there. Using the optimized kernel, the nonlinear principal components of KPCA, which are linear principal components in the kernel feature space, can effectively capture the variation in the data, so the data under normal operating conditions can be modeled more precisely by KPCA for process monitoring. Simulation results on a simple nonlinear system and the benchmark Tennessee Eastman (TE) process demonstrate that the optimized kernel functions lead to significant performance improvements over the popular Gaussian kernel when used in KPCA-based process monitoring.
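To make the idea concrete, the sketch below shows a generic MVU-style kernel learning step followed by KPCA on the learned kernel matrix. It is a minimal illustration, not the paper's exact formulation: it assumes the standard MVU semidefinite program (maximize the feature-space variance subject to local-isometry constraints on a k-nearest-neighbor graph and a centering constraint), and it relies on the cvxpy and scikit-learn libraries purely for convenience; the neighborhood size and solver choice are illustrative parameters.

```python
import numpy as np
import cvxpy as cp
from sklearn.neighbors import NearestNeighbors


def mvu_kernel(X, n_neighbors=4):
    """Learn a data-dependent kernel matrix in the spirit of maximum
    variance unfolding (MVU): maximize variance in the feature space
    while preserving distances on a k-nearest-neighbor graph.
    Generic MVU sketch; not the authors' exact optimization."""
    n = X.shape[0]
    # k-nearest-neighbor graph defining the local-isometry constraints
    nbrs = NearestNeighbors(n_neighbors=n_neighbors + 1).fit(X)
    _, idx = nbrs.kneighbors(X)

    K = cp.Variable((n, n), PSD=True)
    constraints = [cp.sum(K) == 0]  # center the data in feature space
    for i in range(n):
        for j in idx[i, 1:]:        # skip the point itself
            d2 = float(np.sum((X[i] - X[j]) ** 2))
            constraints.append(K[i, i] + K[j, j] - 2 * K[i, j] == d2)

    # "Unfold" the manifold: maximize total variance (trace of K)
    prob = cp.Problem(cp.Maximize(cp.trace(K)), constraints)
    prob.solve(solver=cp.SCS)
    return K.value


def kpca_from_kernel(K, n_components=2):
    """KPCA on a precomputed (centered) kernel matrix: the leading
    eigenvectors give the nonlinear principal component scores that a
    monitoring scheme would turn into T^2 / SPE statistics."""
    eigval, eigvec = np.linalg.eigh(K)
    order = np.argsort(eigval)[::-1][:n_components]
    eigval, eigvec = eigval[order], eigvec[:, order]
    scores = eigvec * np.sqrt(np.maximum(eigval, 0.0))
    return scores, eigval
```

Because the learned kernel is defined only on the training samples, a practical monitoring scheme would additionally need an out-of-sample extension to evaluate the kernel on new observations; that step is omitted here.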