Abstract

Non-negative matrix factorization (NMF) is a relatively new method of matrix decomposition that factors an m×n data matrix X into an m×k matrix W and a k×n matrix H, such that X ≈ WH. Importantly, all entries of X, W, and H are constrained to be non-negative. NMF can be used for dimensionality reduction, since the k columns of W can be regarded as the components into which X has been decomposed. The question arises: how should one choose k? In this paper, we first assess methods for estimating k in the context of NMF on synthetic data. Second, we examine the effect of normalization on the accuracy of this estimate in empirical data. On synthetic data with orthogonal underlying components, methods based on PCA and on Brunet's cophenetic correlation coefficient achieved the highest accuracy. When evaluated on a well-known real dataset, normalization had an unpredictable effect on the estimate, and for any given normalization method, the methods for estimating k gave widely varying results. We conclude that when estimating k, it is best not to apply normalization. If the underlying components are known to be orthogonal, then Velicer's MAP or Minka's Laplace-PCA method might be best. However, when the orthogonality of the underlying components is unknown, none of the methods seemed preferable.
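
To make the setup concrete, the following is a minimal sketch (not the paper's code) of the factorization X ≈ WH using scikit-learn's NMF. The random data matrix and the choice k = 3 are purely illustrative assumptions:

```python
# Illustrative sketch of NMF dimensionality reduction (assumes scikit-learn).
# The data matrix and k = 3 are arbitrary; they are not the paper's settings.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = rng.random((100, 20))           # m x n non-negative data matrix

k = 3                               # the number of components to be chosen
model = NMF(n_components=k, init='random', random_state=0, max_iter=500)
W = model.fit_transform(X)          # m x k: columns are the learned components
H = model.components_               # k x n: mixing coefficients

# Reconstruction error ||X - WH||_F shrinks as k grows, which is why
# principled methods are needed to estimate the "true" k.
err = np.linalg.norm(X - W @ H, 'fro')
print(f"k={k}, Frobenius reconstruction error: {err:.3f}")
```
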

Highlights

  • Matrix decomposition methods [1,2,3] are an important area of study in mathematics, and encompass approaches to factoring an observed matrix into a mixture of other matrices

  • Okun and Priisalu showed that normalization can sometimes reduce the time required to compute the non-negative matrix factorization (NMF) [21] when using Lee and Seung's original recurrence relations (Equations (2)–(4)) [16], with W and H initialized to non-negative random values

  • We explored this question by simulating signal mixtures and testing various matrix decomposition methods on them to estimate the number of underlying components



Introduction

Matrix decomposition methods [1,2,3] are an important area of study in mathematics, encompassing approaches to factoring an observed matrix into a mixture of other matrices. One volume-based method for estimating the number of underlying components rests on the geometric interpretation of the determinant of an N × N matrix as the volume of an N-dimensional parallelepiped: an abrupt decrease in the value of this determinant, plotted as a function of k, indicates the best estimate of the number of underlying components k. Fogel and Young use the algorithm of Zhu and Ghodsi [42], originally developed to automate Cattell's scree test [29], to detect this abrupt decrease. Okun and Priisalu showed that normalization can sometimes reduce the time required to compute the NMF [21] when using Lee and Seung's original recurrence relations (Equations (2)–(4)) [16], with W and H initialized to non-negative random values. This raises the question of whether normalization might also affect the estimate of the number of underlying components k. Here, Lin's method [24] was used to compute the NMF [47].
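
As a hedged NumPy sketch of the kind of recurrence relations referenced above (Lee and Seung's multiplicative updates for minimizing the Frobenius reconstruction error), the code below starts from non-negative random W and H, as in the setting studied by Okun and Priisalu; the iteration count and the small epsilon guard are illustrative choices, not the paper's settings:

```python
# Sketch of Lee-Seung-style multiplicative updates (Frobenius objective).
# Not the paper's implementation; n_iter and eps are illustrative.
import numpy as np

def nmf_multiplicative(X, k, n_iter=200, eps=1e-10, seed=0):
    """Factor a non-negative X (m x n) into W (m x k) and H (k x n)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k))   # non-negative random initialization
    H = rng.random((k, n))
    for _ in range(n_iter):
        # Each update multiplies by a ratio of non-negative terms,
        # so W and H remain non-negative throughout.
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

In the normalization question raised above, the input X would be normalized before running such updates; whether that changes the estimated k is precisely what the paper investigates.
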

Materials and Methods
Iterative Methods
Effects of Normalization
Findings
Discussion
