Abstract

Sparse representation is a useful tool in the fields of machine learning and pattern recognition. Sparse graphs (graphs constructed from the sparse representation of data) have proved to be highly informative for many learning tasks such as label propagation, embedding, and clustering. Constructing an informative graph has been shown to be one of the most important steps, since it significantly affects the final performance of the subsequent graph-based learning algorithm. In this paper, we introduce a new sparse graph construction method that integrates manifold constraints on the unknown sparse codes as a graph regularizer. These constraints are a natural regularizer that has been discarded in existing state-of-the-art graph construction methods. The regularizer constrains the graph coefficients in the same way that a locality-preserving constraint acts on data projections in nonlinear manifold learning. The proposed method is termed Sparse Graph with Laplacian Smoothness (SGLS). We also propose a kernelized version of SGLS. Experimental results on several public image datasets show that the proposed methods can outperform many state-of-the-art methods on the tasks of label propagation and nonlinear and linear embedding.
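To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of sparse graph construction with a Laplacian smoothness penalty on the codes. Each sample is coded over the remaining samples, an l1 term enforces sparsity, and a trace term tr(W L W^T) with a precomputed graph Laplacian L encourages nearby samples to receive similar codes; the function name, solver (plain ISTA), and parameter choices are assumptions for illustration only.

```python
import numpy as np

def sgls_sparse_codes(X, L, lam=0.1, mu=0.1, n_iter=300):
    """Hypothetical SGLS-style sparse coding sketch.

    Approximately solves
        min_W ||X - X W||_F^2 + lam * ||W||_1 + mu * tr(W L W^T)
    subject to diag(W) = 0, via proximal gradient descent (ISTA).
    X is (features, samples); L is a symmetric (samples, samples) Laplacian.
    """
    n = X.shape[1]
    W = np.zeros((n, n))
    G = X.T @ X
    # step size from a Lipschitz bound on the smooth part of the objective
    step = 1.0 / (2 * np.linalg.norm(G, 2) + 2 * mu * np.linalg.norm(L, 2) + 1e-12)
    for _ in range(n_iter):
        grad = 2 * (G @ W - G) + 2 * mu * (W @ L)  # L assumed symmetric
        W = W - step * grad
        # soft-thresholding: proximal operator of the l1 penalty
        W = np.sign(W) * np.maximum(np.abs(W) - step * lam, 0.0)
        np.fill_diagonal(W, 0.0)  # a sample must not code itself
    # symmetrize the codes into an affinity matrix for graph-based learning
    A = 0.5 * (np.abs(W) + np.abs(W).T)
    return W, A
```

The returned affinity matrix `A` can then be fed to any graph-based learner (label propagation, spectral embedding); in practice `L` would itself come from a cheap neighborhood graph over the data.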
