Abstract

Embedded feature selection approaches guide the learning of a projection (selection) matrix through the acquisition of a pseudolabel matrix to perform feature selection. However, the continuous pseudolabel matrix learned from the relaxed problem based on spectral analysis deviates from reality to some extent. To address this issue, we design an efficient feature selection framework, inspired by classical least-squares regression (LSR) and discriminative K-means (DisK-means), called fast sparse discriminative K-means (FSDK) feature selection. First, a weighted pseudolabel matrix with discrete entries is introduced to avoid the trivial solution of unsupervised LSR. Under this condition, no constraint needs to be imposed on the pseudolabel matrix or the selection matrix, which considerably simplifies the combinatorial optimization problem. Second, the l2,p-norm regularizer is introduced to enforce row sparsity of the selection matrix with a flexible p. Consequently, the proposed FSDK model can be viewed as a novel feature selection framework that integrates the DisK-means algorithm with the l2,p-norm regularizer to optimize the sparse regression problem. Moreover, the cost of our model scales linearly with the number of samples, so it handles large-scale data quickly. Comprehensive experiments on various datasets demonstrate the effectiveness and efficiency of FSDK.
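
To make the described setup concrete, below is a minimal Python sketch of one plausible reading of this kind of pipeline: a discrete, size-weighted pseudolabel matrix Y is obtained from K-means, a selection matrix W is fit by minimizing ||XW - Y||_F^2 + lambda * ||W||_{2,p}^p via an iteratively reweighted least-squares loop, and features are then ranked by the row norms of W. The function name, the IRLS solver, and all parameter choices are illustrative assumptions, not the authors' exact FSDK algorithm.

    # Hedged sketch (not the authors' exact FSDK method): unsupervised feature
    # selection with a K-means-derived discrete pseudolabel matrix and an
    # l2,p-norm row-sparsity penalty on the selection matrix W.
    import numpy as np
    from sklearn.cluster import KMeans

    def sparse_kmeans_feature_selection(X, n_clusters=3, p=0.5, lam=1.0,
                                        n_iter=30, top_k=10, eps=1e-8):
        """Rank features by the row norms of W minimizing
        ||X W - Y||_F^2 + lam * ||W||_{2,p}^p, where Y is a weighted
        discrete cluster-indicator (pseudolabel) matrix."""
        n, d = X.shape

        # Discrete pseudolabels from K-means; each indicator column is scaled
        # by the inverse square root of its cluster size (an assumption).
        labels = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=0).fit_predict(X)
        Y = np.zeros((n, n_clusters))
        for c in range(n_clusters):
            idx = labels == c
            Y[idx, c] = 1.0 / np.sqrt(idx.sum() + eps)

        # Iteratively reweighted least squares for the l2,p penalty:
        # row i of W is reweighted by (p/2) * ||w_i||_2^(p-2).
        D = np.eye(d)
        for _ in range(n_iter):
            W = np.linalg.solve(X.T @ X + lam * D, X.T @ Y)
            row_norms = np.linalg.norm(W, axis=1) + eps
            D = np.diag(0.5 * p * row_norms ** (p - 2))

        # Features with the largest row norms of W are selected.
        scores = np.linalg.norm(W, axis=1)
        return np.argsort(scores)[::-1][:top_k], scores

In this sketch the linear system is d x d (d = number of features), so the per-iteration cost grows linearly with the number of samples only through the products X.T @ X and X.T @ Y, which is consistent with the linear-in-samples complexity claimed for FSDK.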
