Abstract
Graph regularized nonnegative matrix factorization (GNMF) algorithms have received considerable attention in machine learning and data mining, and the squared loss is commonly used to measure the quality of the reconstructed data. However, noise is introduced during data reconstruction, and the squared loss is sensitive to noise, which degrades the performance of downstream data analysis tasks. To address this problem, a novel graph regularized sparse NMF (GSNMF) is proposed in this article. To obtain a cleaner data matrix that approximates the high-dimensional matrix, the l₁-norm is imposed on the low-dimensional matrix, which adjusts the feature values of the matrix and enforces a sparsity constraint. In addition, the corresponding inference and an alternating iterative update algorithm for solving the optimization problem are given. Then, an extension of GSNMF, namely graph regularized sparse nonnegative matrix tri-factorization (GSNMTF), is proposed, and the detailed inference procedure is also shown. Finally, experimental results on eight different datasets demonstrate that the proposed models achieve good performance.
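For concreteness, the objective sketched in the abstract can be written as follows. This is only a sketch under stated assumptions: the factorization X ≈ UVᵀ, the graph Laplacian L, and the trade-off parameters λ and α are not given in the abstract; they follow the usual GNMF convention with an added l₁ term on the low-dimensional factor and may differ from the paper's exact formulation.

\[
\min_{\mathbf{U}\ge 0,\;\mathbf{V}\ge 0}\;
\lVert \mathbf{X}-\mathbf{U}\mathbf{V}^{\top}\rVert_F^{2}
\;+\;\lambda\,\operatorname{Tr}\!\big(\mathbf{V}^{\top}\mathbf{L}\mathbf{V}\big)
\;+\;\alpha\,\lVert \mathbf{V}\rVert_{1},
\]

where the first term is the squared reconstruction loss, the second is the graph regularizer built from the data's nearest-neighbor graph, and the third is the l₁ sparsity penalty; an alternating (multiplicative-style) update of U and V is the kind of solver the abstract refers to.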