Abstract

Feature selection, i.e., selecting the most informative subset of features, is an important research direction in dimension reduction. The combinatorial search in feature selection is essentially a binary optimization problem, known to be NP-hard, which can be alleviated by learning feature weights. Traditional feature weight algorithms rely on heuristic search paths. These approaches neglect the interaction and dependency between different features and thus offer no guarantee of optimality. In this paper, we propose a novel joint feature weights learning framework, which imposes both nonnegativity and $\ell_{2,1}$-norm constraints on the feature weights matrix. The nonnegativity property ensures the physical significance of the learned feature weights, while $\ell_{2,1}$-norm minimization achieves joint selection of the most relevant features by exploiting the whole feature space. More importantly, an efficient iterative algorithm with proven convergence is designed to optimize the convex objective function. Using this framework as a platform, we propose new supervised and unsupervised joint feature selection methods. In particular, in the proposed unsupervised method, nonnegative graph embedding is developed to exploit the intrinsic structure of the weighted space. Comparative experiments on seven real-world data sets indicate that our framework is both effective and efficient.
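
To make the role of the $\ell_{2,1}$-norm concrete, the sketch below shows a generic iteratively reweighted solver for an $\ell_{2,1}$-regularized least-squares objective, $\min_W \|XW - Y\|_F^2 + \gamma \|W\|_{2,1}$, where features are ranked by the row norms of the learned weight matrix. This is only an illustration of the general technique, not the paper's exact objective or optimization algorithm; the function names, the parameter `gamma`, and the crude clipping step used to mimic nonnegativity are assumptions introduced here for exposition.

```python
import numpy as np

def l21_norm(W):
    # l2,1 norm: sum of the l2 norms of the rows of W
    return np.sum(np.sqrt(np.sum(W ** 2, axis=1)))

def joint_feature_weights(X, Y, gamma=1.0, n_iter=50, eps=1e-8):
    """Generic reweighted solver for  min_W ||XW - Y||_F^2 + gamma * ||W||_{2,1}.
    Rows of W with large l2 norm mark jointly selected features.
    (Illustrative sketch; not the paper's algorithm.)"""
    W = np.linalg.lstsq(X, Y, rcond=None)[0]            # warm start: ordinary least squares
    for _ in range(n_iter):
        row_norms = np.sqrt(np.sum(W ** 2, axis=1)) + eps
        D = np.diag(1.0 / (2.0 * row_norms))             # reweighting matrix from current row norms
        W = np.linalg.solve(X.T @ X + gamma * D, X.T @ Y)
        W = np.clip(W, 0.0, None)                        # crude nonnegativity, for illustration only
    return W

# Feature ranking: score each feature by the l2 norm of its weight row,
# e.g. scores = np.sqrt(np.sum(W ** 2, axis=1)); top_k = np.argsort(scores)[::-1][:k]
```

The row-wise coupling induced by the $\ell_{2,1}$ norm is what drives entire rows of $W$ toward zero, so features are kept or discarded jointly across all outputs rather than weighted independently.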
