Abstract

In this paper, we first analyze the current state of research on dimension reduction algorithms, with a focus on subspace learning. We find that global subspace learning algorithms rest on the assumption that all data lie in a single space, so samples drawn from different subspaces are not represented well. To overcome this shortcoming, researchers have proposed local linear subspace learning methods to model the manifold. However, regardless of the algorithm used, one must determine whether the samples lie in different subspaces, and if they do, the number of subspaces usually has to be specified in advance. This greatly restricts the flexibility and performance of these algorithms. To address these problems, and inspired by the activation functions used in neural networks, we propose an adaptive local subspace learning method. We also give a method for computing the similarity between samples, so that subspace learning no longer needs to consider whether the samples lie in several different subspaces. Performance analysis experiments show that the subspace learned by the algorithm better represents the adjacency relations among samples on the manifold. Recognition and reconstruction experiments on face databases verify the effectiveness and robustness of the proposed feature extraction method on high-dimensional data samples.
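The abstract does not give the exact similarity formula or learning procedure, but the core idea of an activation-function-gated, adaptive local similarity that avoids fixing the number of subspaces can be illustrated with a minimal sketch. The sigmoid gate, the median-based distance scale, and the LPP-style projection below are illustrative assumptions, not the paper's actual method.

# Illustrative sketch only: the sigmoid gate, the median-based distance scale,
# and the LPP-style projection are assumptions chosen to mirror the described
# idea of an activation-function-inspired, adaptive local similarity.
import numpy as np
from scipy.linalg import eigh

def adaptive_similarity(X, steepness=5.0):
    """Pairwise similarity gated by a sigmoid of the scaled Euclidean distance.

    X : (n_samples, n_features) data matrix.
    Returns an (n_samples, n_samples) symmetric weight matrix with entries in (0, 1).
    """
    sq_norms = np.sum(X ** 2, axis=1)
    d2 = sq_norms[:, None] + sq_norms[None, :] - 2.0 * X @ X.T
    d = np.sqrt(np.maximum(d2, 0.0))
    scale = np.median(d[d > 0])            # data-driven scale, no subspace count needed
    W = 1.0 / (1.0 + np.exp(steepness * (d / scale - 1.0)))  # sigmoid "activation" gate
    np.fill_diagonal(W, 0.0)
    return W

def local_projection(X, W, n_components=2):
    """Locality-preserving projection learned from the adaptive weights (LPP-style)."""
    D = np.diag(W.sum(axis=1))
    L = D - W                                      # graph Laplacian of the similarity graph
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])    # small ridge term for numerical stability
    vals, vecs = eigh(A, B)                        # generalized eigenproblem, ascending eigenvalues
    return vecs[:, :n_components]                  # projection directions with smallest eigenvalues

# Usage: project samples without specifying how many subspaces they come from.
X = np.random.randn(200, 50)
W = adaptive_similarity(X)
P = local_projection(X, W, n_components=10)
X_low = X @ P

Because the sigmoid gate pushes weights toward 0 for distant pairs and toward 1 for nearby pairs, the similarity graph adapts to the local structure of the data rather than to a preset partition into subspaces, which is the property the abstract emphasizes.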
