Abstract

Projective non-negative matrix factorization (PNMF) projects high-dimensional non-negative examples X onto a lower-dimensional subspace spanned by a non-negative basis W and treats WᵀX as their coefficients, i.e., X ≈ WWᵀX. Since PNMF learns a natural parts-based representation W of X, it has been widely used in many fields such as pattern recognition and computer vision. However, PNMF does not perform well in classification tasks because it completely ignores the label information of the dataset. This paper proposes a Discriminant PNMF method (DPNMF) to overcome this deficiency. In particular, DPNMF applies Fisher's criterion to PNMF in order to utilize the label information. Like PNMF, DPNMF learns a single non-negative basis matrix and incurs a lower computational burden than NMF. In contrast to PNMF, DPNMF maximizes the distance between the centers of any two classes of examples while minimizing the distance between any two examples of the same class in the lower-dimensional subspace, and thus has more discriminant power. We develop a multiplicative update rule to solve DPNMF and prove its convergence. Experimental results on four popular face image datasets confirm its effectiveness compared with representative NMF and PNMF algorithms.
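The approximation X ≈ WWᵀX can be illustrated with a short NumPy sketch of plain PNMF (not the paper's DPNMF). The multiplicative update below follows the standard PNMF rule from the literature, with spectral-norm rescaling of W for numerical stability; the function name, parameter names, and iteration count are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def pnmf(X, r, n_iter=200, eps=1e-9):
    """Projective NMF sketch: learn non-negative W so that X ~= W W^T X.

    X: (features x samples) non-negative data matrix.
    r: rank of the learned subspace.
    Uses the standard PNMF multiplicative update
        W <- W * (2 X X^T W) / (W W^T X X^T W + X X^T W W^T W),
    rescaling W by its spectral norm each step to keep the scale bounded.
    """
    m, _ = X.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, r))  # random non-negative initialization
    XXt = X @ X.T           # precompute X X^T (it never changes)
    for _ in range(n_iter):
        XXtW = XXt @ W
        numer = 2.0 * XXtW
        denom = W @ (W.T @ XXtW) + XXtW @ (W.T @ W) + eps
        W *= numer / denom               # multiplicative step keeps W >= 0
        W /= np.linalg.norm(W, 2)        # spectral-norm rescaling
    return W
```

Because the update only multiplies W by non-negative factors, non-negativity is preserved automatically, and WWᵀ acts as an approximate projector onto the learned parts-based subspace.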

Highlights

  • Dimension reduction uncovers the low-dimensional structures hidden in high-dimensional data and removes data redundancy, significantly enhancing performance and reducing the subsequent computational cost

  • The above analysis yields two observations on non-negative matrix factorization (NMF) and its extensions: 1) both NMF and Discriminant NMF (DNMF) suffer from the out-of-sample deficiency, and 2) although PNMF overcomes the out-of-sample deficiency, it does not utilize the label information in a dataset

  • This paper proposes an effective Discriminant Projective Non-negative Matrix Factorization (DPNMF) method to overcome the out-of-sample deficiency of NMF and boost its discriminant power by incorporating the label information in a dataset based on Fisher's criterion


Summary

Introduction

Dimension reduction uncovers the low-dimensional structures hidden in high-dimensional data and removes data redundancy, significantly enhancing performance and reducing the subsequent computational cost. It has been widely used in many areas such as pattern recognition and computer vision. Some data, such as image pixels and video frames, are non-negative, but conventional dimension reduction approaches like principal component analysis (PCA, [1]) and Fisher's linear discriminant analysis (FLDA, [2]) do not preserve this non-negativity property and lead to a holistic representation that is inconsistent with the intuition of learning parts to form a whole. Guan et al. [43][44] proposed a Non-negative Patch Alignment Framework (NPAF) that incorporates margin-maximization-based discriminative information into NMF. Guan et al. [42] extended NMF to a novel low-rank and sparse matrix decomposition method termed Manhattan
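Fisher's criterion, which FLDA uses and DPNMF borrows, compares between-class scatter against within-class scatter. A minimal NumPy sketch of the two scatter matrices follows; the function name and data layout are illustrative assumptions, and DPNMF itself folds this criterion into the PNMF objective rather than computing the matrices standalone.

```python
import numpy as np

def fisher_scatter(X, y):
    """Within-class (Sw) and between-class (Sb) scatter matrices.

    X: (samples x features) data matrix.
    y: class label per sample.
    Fisher's criterion seeks directions that maximize between-class
    scatter while minimizing within-class scatter.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    mu = X.mean(axis=0)                      # global mean
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)                 # class mean
        Sw += (Xc - mc).T @ (Xc - mc)        # spread within class c
        diff = (mc - mu).reshape(-1, 1)
        Sb += len(Xc) * (diff @ diff.T)      # class mean vs global mean
    return Sw, Sb
```

A useful sanity check on this decomposition is the classical identity Sw + Sb = St, where St is the total scatter of the data around the global mean.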

