Abstract

Graph regularized nonnegative matrix factorization (GNMF) algorithms have received much attention in machine learning and data mining, and the squared loss is commonly used to measure the quality of the reconstructed data. However, noise is introduced when the data are reconstructed, and the squared loss is sensitive to noise, which degrades the performance of downstream data analysis tasks. To address this problem, a novel graph regularized sparse NMF (GSNMF) is proposed in this article. To obtain a cleaner data matrix for approximating the high-dimensional matrix, an l₁-norm penalty is added to the low-dimensional matrix to adjust the feature values of the matrix and enforce a sparsity constraint. In addition, the corresponding derivation and an alternating iterative update algorithm for solving the optimization problem are given. Then, an extension of GSNMF, namely graph regularized sparse nonnegative matrix trifactorization (GSNMTF), is proposed, and the detailed derivation is also presented. Finally, experimental results on eight different datasets demonstrate that the proposed models achieve good performance.
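To make the idea concrete, below is a minimal sketch of a graph regularized sparse NMF in Python. It assumes the standard GNMF-style objective ||X − UVᵀ||²_F + λ·Tr(Vᵀ L V) + μ·||V||₁ with L = D − W and multiplicative updates; the function name gsnmf, the parameters lam and mu, and the specific update rules are illustrative assumptions, not the exact algorithm derived in the paper.

```python
import numpy as np

def gsnmf(X, k, W, lam=1.0, mu=0.1, n_iter=200, eps=1e-10):
    """Illustrative graph regularized sparse NMF sketch (not the paper's exact rules).

    Factorizes X (m x n, nonnegative) as X ~= U @ V.T with U (m x k), V (n x k),
    minimizing ||X - U V^T||_F^2 + lam * Tr(V^T L V) + mu * ||V||_1,
    where L = D - W is the Laplacian of the sample graph W.
    Multiplicative updates keep both factors nonnegative.
    """
    m, n = X.shape
    rng = np.random.default_rng(0)
    U = rng.random((m, k))
    V = rng.random((n, k))
    D = np.diag(W.sum(axis=1))  # degree matrix of the sample graph

    for _ in range(n_iter):
        # Update U: plain NMF multiplicative rule (no regularizer on U here).
        U *= (X @ V) / (U @ (V.T @ V) + eps)
        # Update V: the graph term contributes lam*W@V to the numerator and
        # lam*D@V to the denominator; the l1 penalty adds a constant mu to the
        # denominator, which shrinks V toward sparsity.
        V *= (X.T @ U + lam * (W @ V)) / (V @ (U.T @ U) + lam * (D @ V) + mu + eps)
    return U, V

if __name__ == "__main__":
    # Toy data: 50 features x 30 samples, plus a simple 5-nearest-neighbor sample graph.
    rng = np.random.default_rng(1)
    X = np.abs(rng.random((50, 30)))
    dist = np.linalg.norm(X.T[:, None, :] - X.T[None, :, :], axis=2)
    W = np.zeros((30, 30))
    for i in range(30):
        nbrs = np.argsort(dist[i])[1:6]
        W[i, nbrs] = 1.0
    W = np.maximum(W, W.T)  # symmetrize the adjacency matrix
    U, V = gsnmf(X, k=5, W=W, lam=1.0, mu=0.1)
    print("reconstruction error:", np.linalg.norm(X - U @ V.T))
```

The trifactorization variant (GSNMTF) would factor X into three nonnegative matrices instead of two, but its update rules follow the paper's own derivation and are not reproduced here.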
