Abstract

Affiliation number 1 for the first, second, and third authors is incorrect. The correct affiliation is: National Laboratory for Parallel and Distributed Processing, School of Computer Science, National University of Defense Technology, Changsha, Hunan, China.

Highlights

  • Nonnegative matrix factorization (NMF) factorizes a given nonnegative data matrix X ∈ ℝ^{m×n} into two lower-rank nonnegative factor matrices, i.e., W ∈ ℝ^{m×r} and H ∈ ℝ^{r×n}, where r ≪ m and r ≪ n

  • To overcome the aforementioned deficiencies of MFGD, and motivated by limited-memory BFGS (L-BFGS) [14], we propose a limited-memory fast gradient descent (L-FGD) method that directly approximates the product of the Hessian inverse and the gradient used by the multivariate Newton method in MFGD

  • To efficiently solve our line search problem (7), we develop a limited-memory FGD (L-FGD) method
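
The highlights describe approximating the product of the Hessian inverse and the gradient from a limited history of updates, as in L-BFGS. The paper's own L-FGD procedure is not reproduced here; as a point of reference, a minimal sketch of the standard L-BFGS two-loop recursion (which computes an approximation of H⁻¹·g from recent curvature pairs without forming any matrix) looks like this:

```python
import numpy as np

def two_loop_recursion(grad, s_list, y_list):
    """Standard L-BFGS two-loop recursion (illustrative, not the paper's L-FGD).

    Approximates H_inv @ grad from the last few curvature pairs
    s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k, using only vector
    operations and O(m*n) memory for m stored pairs.
    """
    q = grad.copy()
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    alphas = []
    # First loop: newest pair to oldest.
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        alpha = rho * (s @ q)
        alphas.append(alpha)
        q -= alpha * y
    # Initial Hessian scaling gamma = (s·y)/(y·y), a common heuristic.
    s, y = s_list[-1], y_list[-1]
    q *= (s @ y) / (y @ y)
    # Second loop: oldest pair to newest.
    for (s, y, rho), alpha in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        beta = rho * (y @ q)
        q += (alpha - beta) * s
    return q  # approximates H^{-1} @ grad
```

Because every stored pair satisfies the curvature condition s·y > 0 on a convex objective, the implicit matrix is positive definite and the returned vector is always a descent direction.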


Summary

Introduction

NMF factorizes a given nonnegative data matrix X ∈ ℝ^{m×n} into two lower-rank nonnegative factor matrices, i.e., W ∈ ℝ^{m×r} and H ∈ ℝ^{r×n}, where r ≪ m and r ≪ n. It is a powerful dimension reduction method and has been widely used in many fields such as data mining [1] and bioinformatics [2]. To utilize the discriminative information in a dataset, Zafeiriou et al. [5] proposed discriminant NMF (DNMF), which incorporates Fisher's criterion into NMF for classification. Sandler and Lindenbaum [6] proposed an earth mover's distance metric-based NMF (EMDNMF) to model the distortion of images for image segmentation and texture classification. Guan et al. [7] investigated Manhattan NMF (MahNMF) for low-rank and sparse matrix factorization of a nonnegative matrix and developed an efficient algorithm to solve it.
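
To make the factorization concrete, here is a minimal sketch of plain NMF using the classical Lee–Seung multiplicative updates for the Frobenius-norm objective (a generic baseline, not the L-FGD method proposed in this paper; the function name and iteration count are illustrative choices):

```python
import numpy as np

def nmf_multiplicative(X, r, n_iter=200, eps=1e-10, seed=0):
    """Factor a nonnegative X (m x n) into W (m x r) and H (r x n), r << min(m, n),
    by minimizing ||X - W H||_F^2 with Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + eps  # nonnegative random initialization
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        # Elementwise multiplicative updates keep W and H nonnegative.
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Each update multiplies the current factor by a nonnegative ratio, so nonnegativity is preserved automatically and no projection step is needed; this simplicity is what made multiplicative updates the de facto baseline that later first-order methods are compared against.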

