Abstract
Sparse subspace learning has been demonstrated to be effective in data mining and machine learning. In this paper, we cast unsupervised feature selection as a matrix factorization problem from the viewpoint of sparse subspace learning. By minimizing the reconstruction residual, the learned feature weight matrix, subject to the l2,1-norm and non-negativity constraints, not only removes irrelevant features but also captures the underlying low-dimensional structure of the data points. Meanwhile, to enhance the model's robustness, an l1-norm error function is adopted to resist outliers and sparse noise. An efficient iterative algorithm is introduced to optimize this non-convex, non-smooth objective function, and a proof of its convergence is given. Although there is a subtraction term in our multiplicative update rule, we verify that it preserves non-negativity. The superiority of our model is demonstrated by comparative experiments on various datasets, both in their original form and with malicious corruption added.
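For concreteness, a plausible instance of the objective described above (the abstract does not state the exact formulation, so the notation, the factorization form, and the trade-off parameter \lambda below are assumptions) is

\min_{W \ge 0,\, H} \; \| X - XWH \|_{1} + \lambda \| W \|_{2,1},

where X \in \mathbb{R}^{n \times d} is the data matrix, W \in \mathbb{R}^{d \times k} is the non-negative feature weight matrix whose row norms score the features (the l2,1-norm drives entire rows to zero, discarding irrelevant features), and H \in \mathbb{R}^{k \times d} reconstructs the data from the selected low-dimensional subspace; the l1-norm on the residual downweights outliers and sparse noise relative to a squared-error fit.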