Abstract

Manufacturing industries collect massive amounts of multivariate measurement data through automated inspection processes. Noisy measurements and high-dimensional, irrelevant features make it difficult to identify useful patterns in these data. Principal component analysis (PCA) summarizes a dataset linearly with a smaller number of latent variables, whereas kernel principal component analysis (KPCA) can identify nonlinear patterns. One challenge in KPCA is to map the denoised signal from the high-dimensional feature space back to its preimage in the input space so that the nonlinear variation sources can be visualized; such an inverse map, however, is not always defined. This article proposes a new meta-method, applicable to any KPCA algorithm, for approximating the preimage. It improves upon previous work that relied on the strong assumption of noise-free training data, an assumption that is problematic for applications such as manufacturing variation analysis. To attenuate noise in the kernel subspace estimation, the final preimage is estimated as the average of preimages computed from bagged samples drawn from the original dataset. The improvement is most pronounced when the parameters differ from those that minimize the error rate. Consequently, the proposed approach improves the robustness of any base KPCA algorithm. The usefulness of the proposed method is demonstrated on a classic handwritten digit dataset and a face dataset, where significant improvement over existing methods is observed.
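
As a rough illustration of the bagging idea described above, the sketch below averages KPCA preimages over bootstrap resamples, using scikit-learn's KernelPCA with its built-in learned inverse map as the base KPCA algorithm. This is not the authors' implementation; the function name bagged_kpca_preimage and parameters such as n_bags, n_components, and gamma are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def bagged_kpca_preimage(X, n_bags=25, n_components=4, gamma=None, random_state=0):
    """Sketch: average KPCA preimages over bootstrap resamples of the
    (possibly noisy) data to attenuate noise in the kernel subspace estimate."""
    rng = np.random.default_rng(random_state)
    n = X.shape[0]
    preimages = np.zeros_like(X, dtype=float)
    for _ in range(n_bags):
        idx = rng.choice(n, size=n, replace=True)   # bootstrap (bagged) sample
        kpca = KernelPCA(n_components=n_components, kernel="rbf",
                         gamma=gamma, fit_inverse_transform=True)
        kpca.fit(X[idx])                            # estimate kernel subspace on this bag
        scores = kpca.transform(X)                  # project all points onto the bag's subspace
        preimages += kpca.inverse_transform(scores) # approximate preimage in input space
    return preimages / n_bags                       # final preimage = average over bags
```

Any other base KPCA algorithm with a preimage routine could be substituted for the KernelPCA call; the bagging wrapper itself does not depend on how the individual preimages are computed.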
