Abstract

Kernel principal component analysis (KPCA) has become a popular technique for process monitoring in recent years. Its performance, however, depends heavily on the choice of kernel function, and methods for selecting an appropriate kernel from the infinitely many candidates have only been touched on sporadically in the research literature. In this paper, a novel methodology is proposed for automatically learning a data-dependent kernel function from the input data; an improved KPCA is then obtained by using this data-dependent kernel in place of a fixed kernel in traditional KPCA. The learning procedure consists of two parts: learning a kernel matrix and approximating a kernel function. The kernel matrix is learned via a manifold learning method, maximum variance unfolding (MVU), which accounts for the underlying manifold structure so that the principal components are linear in kernel space. A kernel function is then approximated via the generalized Nyström formula. The effectiveness of the improved KPCA model is confirmed on a numerical simulation and on the Tennessee Eastman (TE) process benchmark.
