Abstract

Nonnegative Matrix Factorization (NMF) is a popular technique in machine learning. Its strength is that it decomposes a nonnegative matrix into two nonnegative factors whose product well approximates that matrix. However, the nonnegativity constraint on the data matrix limits its applications. Additionally, the representations learned by NMF fail to respect the intrinsic geometric structure of the data. In this paper, we propose a novel unsupervised matrix factorization method, called Robust Structured Convex Nonnegative Matrix Factorization (RSCNMF). RSCNMF not only achieves meaningful factorizations of mixed-sign data, but also learns a discriminative representation by leveraging the local and global structures of the data. Moreover, it introduces an L2,1-norm loss function to deal with noise and outliers, and exploits an L2,1-norm feature regularizer to select discriminative features across all samples. We develop an alternating iterative scheme to solve this new model. The convergence of RSCNMF is proven theoretically and verified empirically. Experimental results on eight real-world data sets show that our RSCNMF algorithm matches or outperforms state-of-the-art methods.
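To make the factorization described above concrete, here is a minimal NumPy sketch of standard NMF with the classic Lee–Seung multiplicative updates. This is an illustration of plain NMF only, not the RSCNMF algorithm of the paper; the data, rank `r`, and iteration count are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.abs(rng.standard_normal((20, 10)))   # nonnegative data matrix
r = 4                                       # assumed low rank
W = np.abs(rng.standard_normal((20, r)))    # nonnegative factor (basis)
H = np.abs(rng.standard_normal((r, 10)))    # nonnegative factor (coefficients)

eps = 1e-9  # avoid division by zero
for _ in range(200):
    # Multiplicative updates for min ||X - WH||_F^2; they preserve nonnegativity
    H *= (W.T @ X) / (W.T @ W @ H + eps)
    W *= (X @ H.T) / (W @ H @ H.T + eps)

# Relative reconstruction error of the low-rank product W @ H
err = np.linalg.norm(X - W @ H, "fro") / np.linalg.norm(X, "fro")
```

Because the updates are purely multiplicative, `W` and `H` stay elementwise nonnegative throughout, which is the property that gives NMF factors their interpretability.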

Highlights

  • Nonnegative matrix factorization (NMF) focuses on well approximating a high-dimensional nonnegative matrix as the product of two low-dimensional nonnegative factor matrices. The factor learned by NMF is an effective low-dimensional representation of the original data and has explicit physical meaning.

  • We propose a novel unsupervised Robust Structured Convex Nonnegative Matrix Factorization (RSCNMF) to find a discriminative representation by leveraging local and global structures

  • We propose robust structured convex nonnegative matrix factorization (RSCNMF) to learn a robust discriminative representation


Summary

INTRODUCTION

Nonnegative matrix factorization (NMF) focuses on well approximating a high-dimensional nonnegative matrix as the product of two low-dimensional nonnegative factor matrices. 2) Our algorithm integrates matrix factorization and joint sparse learning into a novel unsupervised framework. In this framework, we apply the L2,1-norm based loss function to characterize a general model for feature learning methods based on matrix factorization, and to improve the robustness of the proposed model in real-world applications. 3) The proposed algorithm devises a new matrix factorization model by reconciling local and global structures, the L2,1-norm based loss function, and the L2,1-norm feature regularizer. We formulate this model as an optimization problem and solve it with an iterative multiplicative scheme. The proposed algorithm can be naturally extended to semi-supervised scenarios.
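The robustness argument for the L2,1-norm loss can be sketched numerically: the L2,1 norm sums the Euclidean norms of the residual columns, so a single corrupted sample contributes linearly rather than quadratically, as under the Frobenius norm. A small NumPy illustration, assuming samples are stored as columns (the row convention is analogous); the matrix sizes and outlier magnitude are arbitrary assumptions:

```python
import numpy as np

def l21_norm(M):
    """L2,1 norm: sum of the Euclidean norms of the columns of M.
    Each column enters linearly, so one corrupted column is
    penalized less harshly than under the Frobenius norm."""
    return np.linalg.norm(M, axis=0).sum()

rng = np.random.default_rng(1)
R = rng.standard_normal((5, 8)) * 0.1   # small, well-behaved residual matrix
clean_l21 = l21_norm(R)

R_out = R.copy()
R_out[:, 0] += 100.0                    # corrupt one sample (one column)

# How much each loss inflates when a single outlier sample appears
fro_ratio = np.linalg.norm(R_out, "fro") / np.linalg.norm(R, "fro")
l21_ratio = l21_norm(R_out) / clean_l21
```

The L2,1 loss inflates by a much smaller factor than the Frobenius loss, which is why the factorization is less dominated by noisy or outlying samples.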

RELATED WORK
PROBLEM FORMULATION
CONVERGENCE ANALYSIS
COMPUTATIONAL COMPLEXITY
L2-NORM-BASED STRUCTURED CONVEX NONNEGATIVE MATRIX FACTORIZATION
DATA SETS
EXPERIMENTAL SETTING
EXPERIMENTAL RESULTS
Findings
CONCLUSION AND FUTURE WORK
