Abstract

Kernel partial least squares (KPLS) is an efficient approach to modeling nonlinear systems: it maps the original input space into a high-dimensional feature space in which the regression becomes linear. Unlike other nonlinear modeling techniques, KPLS involves no nonlinear optimization procedure and has low computational complexity, comparable to that of linear partial least squares (PLS). However, when the training data set contains outliers, KPLS regression performs poorly because PLS is sensitive to outliers. In that situation, a more robust regression method such as partial robust M-regression (PRM) can replace PLS inside the nonlinear kernel-based algorithm. This paper presents kernel partial robust M-regression (KPRM). The nonlinear structure of the original input space is transformed into a linear one in the high-dimensional feature space, and by choosing an appropriate weighting strategy the proposed method becomes robust to both types of outliers (vertical outliers and leverage points). The prediction performance of KPRM is compared with that of PLS, PRM, and KPLS on three examples; KPRM yields much lower prediction errors on all three data sets, while the loss in efficiency on clean data is small.
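The two ingredients the abstract combines can be sketched in a few lines: a NIPALS-style kernel PLS fit on a centered kernel matrix, and an M-estimation weight function of the kind PRM uses to downweight outlying samples. This is an illustrative sketch, not the authors' implementation; the RBF kernel, the number of components, the Fair weight function, and the tuning constant `c = 1.5` are all assumptions made for the example.

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=0.5):
    """Gaussian (RBF) kernel matrix between two sample sets (assumed choice)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kpls_fit(Kc, yc, n_components=6):
    """NIPALS-style kernel PLS on a centered kernel Kc and centered response yc.

    Returns score matrices T (kernel scores) and U (response scores).
    """
    n = Kc.shape[0]
    Kd, Yd = Kc.copy(), yc.reshape(-1, 1).astype(float)
    T = np.zeros((n, n_components))
    U = np.zeros((n, n_components))
    for a in range(n_components):
        u = Yd[:, [0]]
        t = Kd @ u
        t /= np.linalg.norm(t)
        u = Yd @ (Yd.T @ t)          # one pass is exact for a univariate y
        u /= np.linalg.norm(u)
        T[:, [a]], U[:, [a]] = t, u
        P = np.eye(n) - t @ t.T      # deflate kernel and response
        Kd = P @ Kd @ P
        Yd = Yd - t @ (t.T @ Yd)
    return T, U

def fair_weights(r, c=1.5):
    """PRM-style robust weights from residuals r: large residuals get small
    weights. Fair-type function with a MAD scale; c = 1.5 is an assumption."""
    s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12
    return 1.0 / (1.0 + np.abs(r / (c * s))) ** 2

# Demo: fit a smooth nonlinear function with kernel PLS.
n = 40
X = np.linspace(0.0, 2.0 * np.pi, n).reshape(-1, 1)
y = np.sin(X).ravel()

K = rbf_kernel(X, X)
H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
Kc = H @ K @ H
yc = y - y.mean()

T, U = kpls_fit(Kc, yc)
B = U @ np.linalg.inv(T.T @ Kc @ U) @ T.T @ yc.reshape(-1, 1)
yhat = (Kc @ B).ravel() + y.mean()
mse = np.mean((yhat - y) ** 2)

# Robust weights computed from the residuals would feed the next
# reweighted fit in a KPRM-style iteration.
w = fair_weights(yhat - y)
```

In a full KPRM iteration, the weights `w` would be applied to the samples (rows of the kernel matrix and response) and the kernel PLS fit repeated until the weights stabilize; the sketch above shows only one pass of each ingredient.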
