Abstract

In the field of graph embedding learning for unsupervised feature selection, similarity matrices are usually constructed only from the initial noise-laden samples, and the potential associations among features are often ignored. To address these issues, this article proposes an effective unsupervised group feature selection approach via non-convex regularized graph embedding and self-representation (NLGMS). NLGMS integrates similarity matrix learning and feature selection into a joint graph-based framework. It incorporates self-representation learning to preserve the global structure and learns the projection matrix under a nonconvex constraint to guide the feature selection process. Specifically, first, NLGMS presents a unified framework for global structure learning, local structure learning, and feature selection, so that the intrinsic structure of the data is adaptively captured. Second, the nonconvex ℓ2,0-norm constraint is imposed on the projection matrix to perform feature selection, ensuring that the optimal feature subset is selected precisely. The ℓ2,0-norm constraint accounts for potential associations among features and evaluates the feature subset as an integral whole, thereby achieving performance that convex regularized constraints often cannot. Finally, the introduction of self-representation learning facilitates the reconstruction of samples in low-rank subspaces, strengthening the model's robustness. To solve this challenging nonconvex model, an alternating optimization technique is developed. In addition, to validate the superiority of NLGMS, extensive experiments are performed on one synthetic dataset, six benchmark datasets, and one fish image dataset. The source code is available at: https://github.com/hrbai/NLGMS.
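Since the ℓ2,0-norm of a projection matrix counts its nonzero rows, constraining ‖W‖₂,₀ ≤ k forces all but k rows of W to zero, so the surviving rows jointly index the selected feature subset. The following minimal NumPy sketch illustrates this row-sparsity view of group feature selection; it is not the paper's implementation, and the data X, projection W, and budget k are hypothetical placeholders.

```python
# Minimal sketch (not the NLGMS implementation): how a row-sparse projection
# matrix under an l2,0-norm constraint selects features as a group.
import numpy as np

def l20_norm(W):
    """Number of nonzero rows of W: ||W||_{2,0} = #{i : ||W[i, :]||_2 != 0}."""
    return int(np.sum(np.linalg.norm(W, axis=1) > 0))

def select_features(X, W, k):
    """Keep the k features whose projection rows have the largest l2-norms.

    Treating each row of W as one feature's coefficients, the constraint
    ||W||_{2,0} <= k zeroes out all but k rows, so the remaining rows
    identify the selected feature subset as a whole.
    """
    row_norms = np.linalg.norm(W, axis=1)        # importance score per feature
    selected = np.argsort(row_norms)[::-1][:k]   # indices of the top-k features
    W_sparse = np.zeros_like(W)
    W_sparse[selected] = W[selected]             # project W onto the l2,0 ball
    return X[:, selected], W_sparse, np.sort(selected)

# Toy usage with random data (shapes are illustrative only)
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))   # 100 samples, 20 features
W = rng.standard_normal((20, 5))     # projection to a 5-dimensional embedding
X_sel, W_sparse, idx = select_features(X, W, k=8)
print(l20_norm(W_sparse), idx)       # -> 8, plus the indices of the kept features
```

Because the constraint acts on whole rows rather than individual entries, a feature is either retained with all of its projection coefficients or discarded entirely, which is the group-level behavior the abstract contrasts with convex entrywise regularizers.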
