Abstract

The multiple linear model is used successfully to extend the linear model to nonlinear problems. However, conventional multilinear models fail to learn the global structure of a training data set because their local linear models are independent of each other. Furthermore, the local linear transformations are learned in the original space, so the performance of multilinear methods depends strongly on the partition results. This paper presents a kernel approach to local linear discriminant analysis for face recognition. In the original space, we use a set of local linear transformations with interpolation to approximate an optimal global nonlinear transformation. Based on these local linear models, we derive an explicit kernel mapping that maps the training data into a high-dimensional transformed space, in which the optimal transformation is learned globally. Experimental results show that the proposed method is more robust to the partition results than conventional multilinear methods. Compared with general nonlinear kernels, which rely on a black-box mapping, the proposed method also reduces the negative effects of potential overfitting. © 2016 Institute of Electrical Engineers of Japan. Published by John Wiley & Sons, Inc.
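To illustrate the general idea of an explicit mapping built from local partitions followed by a single global discriminant analysis, the following is a minimal sketch, not the authors' method: it assumes k-means partitioning, Gaussian soft-membership weights for interpolation, a stacked feature map phi(x) = [w_1(x)x, ..., w_K(x)x], and scikit-learn's LDA as the global transformation; all of these choices are illustrative assumptions.

```python
# Minimal sketch (illustrative only, not the paper's implementation):
# build an explicit feature map from local partitions, then fit one global LDA.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def local_membership(X, centers, sigma=1.0):
    """Soft (interpolated) membership of each sample in each local region."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return w / w.sum(axis=1, keepdims=True)  # rows sum to 1

def explicit_map(X, centers, sigma=1.0):
    """Stack locally weighted copies of x: phi(x) = [w_1(x)x, ..., w_K(x)x]."""
    W = local_membership(X, centers, sigma)            # shape (n, K)
    return (W[:, :, None] * X[:, None, :]).reshape(len(X), -1)

# Toy two-class data in 2-D (stand-in for face features).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Partition the original space, map explicitly, and learn one global transformation.
centers = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X).cluster_centers_
Phi = explicit_map(X, centers, sigma=1.5)
lda = LinearDiscriminantAnalysis().fit(Phi, y)
print("training accuracy:", lda.score(Phi, y))
```

Because the mapping is explicit rather than a black-box kernel, the dimensionality of the transformed space stays bounded by the number of local regions times the input dimension, which is one way the interpolated local models can temper overfitting relative to generic nonlinear kernels.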
