Abstract

The convex nonnegative matrix factorization (CNMF) is a variation of nonnegative matrix factorization (NMF) in which each cluster center is expressed as a linear combination of the data points and each data point is represented by a linear combination of the cluster centers. When there exists nonlinearity in the manifold structure, both NMF and CNMF are incapable of characterizing the geometric structure of the data. This paper introduces a neighborhood preserving convex nonnegative matrix factorization (NPCNMF), which imposes an additional constraint on CNMF that each data point can be represented as a linear combination of its neighbors. Thus our method is able to reap the benefits of both nonnegative data factorization and manifold structure preservation. An efficient multiplicative updating procedure is derived, and its convergence is guaranteed theoretically. The feasibility and effectiveness of NPCNMF are verified on several standard data sets with promising results.
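The CNMF representation described above can be sketched in a few lines. In the factorization X ≈ (X W) Gᵀ with W, G ≥ 0, the columns of F = X W are cluster centers written as linear combinations of the data points, and Gᵀ expresses each data point as a combination of those centers. The following is an illustrative sketch of that structure only (random data, least-squares coefficients clipped to be nonnegative), not the paper's actual update algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_samples, k = 10, 30, 3

X = rng.random((n_features, n_samples))   # data matrix, one sample per column
W = rng.random((n_samples, k))            # nonnegative weights over the samples
W /= W.sum(axis=0, keepdims=True)         # normalize columns: centers are convex combos
F = X @ W                                 # cluster centers lie in the span of the data

# Coefficients G^T expressing each sample in terms of the centers,
# clipped to remain nonnegative as CNMF requires.
G_T = np.clip(np.linalg.lstsq(F, X, rcond=None)[0], 0.0, None)
X_hat = F @ G_T                           # CNMF-style reconstruction X ~ (X W) G^T
```

Because the centers are constrained to the span of the data, the basis vectors stay interpretable as (weighted) prototypes drawn from the samples themselves.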

Highlights

  • Nonnegative matrix factorization (NMF) [1, 2] has been widely used in information retrieval, computer vision, pattern recognition, and DNA gene expression analysis [3, 4]

  • We introduce a novel matrix factorization algorithm, called neighborhood preserving convex nonnegative matrix factorization (NPCNMF), which is based on the assumption that if a data point can be reconstructed from its neighbors in the input space, it can be reconstructed from its neighbors by the same reconstruction coefficients in the low-dimensional subspace, that is, the locally linear embedding assumption [20]

  • We show the performance of the proposed method on face recognition and compare it with popular subspace learning algorithms: four unsupervised ones, namely principal component analysis (PCA) [21], neighborhood preserving embedding (NPE) [20], local nonnegative matrix factorization (LNMF) [4], and convex nonnegative matrix factorization (CNMF) [13], and one supervised algorithm, linear discriminant analysis (LDA) [21]
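The locally linear embedding assumption in the second highlight can be made concrete: for each data point, solve a small regularized least-squares problem for the weights that reconstruct it from its neighbors, subject to the weights summing to one. A minimal sketch of that standard LLE-style step (function name and regularization constant are illustrative):

```python
import numpy as np

def reconstruction_weights(X, i, neighbors, reg=1e-3):
    """Weights reconstructing column X[:, i] from its neighbor columns.

    Solves min ||x_i - sum_j w_j x_j||^2 subject to sum_j w_j = 1 via the
    regularized local Gram system, as in locally linear embedding.
    """
    Z = X[:, neighbors] - X[:, [i]]                   # center neighbors on x_i
    C = Z.T @ Z                                       # local Gram matrix
    C += reg * np.trace(C) * np.eye(len(neighbors))   # regularize for stability
    w = np.linalg.solve(C, np.ones(len(neighbors)))
    return w / w.sum()                                # enforce sum-to-one
```

NPCNMF's constraint then asks the low-dimensional representations to be reproducible from their neighbors with these same coefficients.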


Summary

Introduction

Nonnegative matrix factorization (NMF) [1, 2] has been widely used in information retrieval, computer vision, pattern recognition, and DNA gene expression analysis [3, 4]. NMF decomposes the data matrix as the product of two matrices that possess only nonnegative elements. It has been stated by many researchers that such a decomposition has many favorable properties over other similar decompositions, such as PCA. One of the most useful properties of NMF is that it usually leads to a parts-based representation, because it allows only additive, not subtractive, combinations. Such a representation encodes much of the data, making it easy to interpret. All the methods mentioned above are unsupervised. Wang et al. [10] and Zafeiriou et al. [11] independently proposed the Fisher-NMF, which was further studied by Kotsia et al. [12], by adding an additional constraint seeking to maximize the between-class scatter and minimize the within-class scatter in the subspace spanned by the bases.
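The decomposition described above is classically computed with the Lee-Seung multiplicative updates, which minimize the Frobenius reconstruction error while preserving nonnegativity. A minimal self-contained sketch (function name, iteration count, and the small epsilon guard are illustrative choices, not the paper's method):

```python
import numpy as np

def nmf_multiplicative(V, k, n_iter=200, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates minimizing ||V - W H||_F^2.

    V : (m, n) nonnegative data matrix; k : number of components.
    Returns nonnegative factors W (m, k) and H (k, n).
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(n_iter):
        # Multiplicative updates keep factors nonnegative; eps guards
        # against division by zero in the denominators.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Because every update multiplies by a nonnegative ratio, the factors stay elementwise nonnegative throughout, which is what yields the additive, parts-based representation.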

