Abstract
High dimensionality is a common characteristic of real-world data, and it often leads to high time and space complexity or poor performance in subsequent methods. Subspace learning, as a dimension reduction technique, provides a way to overcome this problem. In this paper, we introduce multiobjective evolutionary optimization into subspace learning and propose a Pareto-based sparse subspace learning algorithm for classification tasks. The proposed algorithm minimizes two conflicting objective functions: the reconstruction error and the sparsity. A kernel trick derived from the Gaussian kernel is applied to the sparse subspace learning to handle nonlinear structure in the data. To speed up convergence, an entropy-driven initialization scheme and a gradient-descent mutation scheme are designed specifically. Finally, a knee point is selected from the Pareto front to guarantee a solution with good classification performance that is also as sparse as possible. Experiments and detailed analysis on real-life datasets and hyperspectral images demonstrate that the proposed model achieves results comparable to existing conventional subspace learning and evolutionary feature selection algorithms. Hence, this paper provides a more flexible and efficient approach to sparse subspace learning.
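For illustration only, the following is a minimal sketch of how the two objectives and the knee-point selection described above might be instantiated. It assumes the reconstruction error is measured as a Frobenius-norm residual of a candidate projection W, that sparsity is measured as the count of nonzero entries in W, and that the knee is the front point farthest from the line joining the two extreme solutions; these specific choices are assumptions, not details taken from the abstract.

```python
import numpy as np

def objectives(X, W):
    """Evaluate the two conflicting objectives for a candidate projection W.
    Assumed forms: squared Frobenius reconstruction error and an l0-style
    sparsity count (number of nonzero entries in W)."""
    recon_error = np.linalg.norm(X - X @ W @ W.T, ord="fro") ** 2
    sparsity = np.count_nonzero(W)
    return recon_error, sparsity

def knee_point(front):
    """Pick the knee of a 2-D Pareto front: the point with the largest
    perpendicular distance to the line joining the two extreme solutions."""
    F = np.asarray(front, dtype=float)
    # Normalize each objective to [0, 1] so the two scales are comparable.
    F = (F - F.min(axis=0)) / (np.ptp(F, axis=0) + 1e-12)
    a = F[F[:, 0].argmin()]          # extreme: smallest first objective
    b = F[F[:, 0].argmax()]          # extreme: largest first objective
    line = b - a
    line = line / (np.linalg.norm(line) + 1e-12)
    # Perpendicular component of each point relative to the extreme line.
    rel = F - a
    perp = rel - np.outer(rel @ line, line)
    return int(np.argmax(np.linalg.norm(perp, axis=1)))
```

In this sketch, the evolutionary search would evaluate `objectives` for each candidate in the population, and `knee_point` would be applied to the final nondominated set to pick the single solution used for classification.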