Abstract

Non-negative Matrix Factorization (NMF) is a popular model in machine learning that can learn a parts-based representation by seeking two non-negative matrices whose product best approximates the original matrix. However, NMF does not consider the manifold structure of the data, and much of the existing work uses the graph Laplacian to ensure the smoothness of the learned representation coefficients on the data manifold. Beyond smoothness, recent theoretical work suggests that we should also ensure second-order smoothness of the NMF mapping, which measures the linearity of the mapping along the data manifold. Based on the equivalence between the gradient field of a linear function and a parallel vector field, we propose to find the NMF mapping that minimizes the approximation error while simultaneously requiring its gradient field to be as parallel as possible. The continuous objective function on the manifold can be discretized and optimized under the general NMF framework. Extensive experimental results suggest that the proposed parallel field regularized NMF provides a better data representation and achieves higher accuracy in image clustering.
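The basic factorization the abstract refers to — approximating a non-negative data matrix X by the product of two non-negative factors W and H — can be sketched with the classical Lee–Seung multiplicative updates. This is a minimal illustration of plain NMF only; the graph-Laplacian and parallel-field regularizers discussed in the abstract are not implemented here, and the function name and parameters are illustrative, not from the paper.

```python
import numpy as np

def nmf(X, k, n_iter=200, eps=1e-9, seed=0):
    """Plain NMF via multiplicative updates: X ≈ W @ H with W, H >= 0.

    Minimizes the Frobenius-norm reconstruction error. A small eps keeps
    the updates numerically stable and the factors strictly positive.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(n_iter):
        # Multiplicative updates preserve non-negativity by construction.
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Usage: factor a random non-negative matrix and check reconstruction.
X = np.random.default_rng(1).random((20, 30))
W, H = nmf(X, k=5)
rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

A manifold-regularized variant would add a penalty term (e.g. tr(H L Hᵀ) for a graph Laplacian L) to the objective, changing the update rule for H accordingly.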
