Abstract

Non-negative matrix factorization (NMF) is a widely used method for feature extraction in unsupervised learning that decomposes a target matrix into the product of two non-negative matrices. Conventional NMF algorithms use the Euclidean distance or the Kullback–Leibler divergence between the matrix components as the discrepancy measure. A drawback of these algorithms is their lack of robustness against outlier noise: when the target matrix is contaminated by outliers, they often fail to extract latent structure or interpretable information. To address this problem, we propose robust NMF algorithms that combine a statistical model of the reconstruction with the $\gamma$-divergence. We theoretically investigate properties of the proposed algorithms, such as convergence and robustness, and numerically demonstrate their validity.
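
To make the conventional baseline referenced above concrete, the following is a minimal sketch of NMF with Euclidean-distance multiplicative updates (Lee–Seung style) in NumPy. The function name `nmf_euclidean`, the parameters, and the stopping rule are illustrative choices; the robust $\gamma$-divergence updates proposed in the paper are not reproduced here.

```python
import numpy as np

def nmf_euclidean(V, rank, n_iter=200, eps=1e-10, seed=0):
    """Conventional NMF via multiplicative updates minimizing the
    Euclidean (Frobenius) reconstruction error ||V - WH||_F^2."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        # H <- H * (W^T V) / (W^T W H)
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        # W <- W * (V H^T) / (W H H^T)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Usage: factorize a random non-negative matrix and check the residual.
V = np.abs(np.random.default_rng(1).normal(size=(50, 40)))
W, H = nmf_euclidean(V, rank=5)
print(np.linalg.norm(V - W @ H))
```

Because every factor in these updates is non-negative, W and H stay non-negative throughout; the drawback noted in the abstract is that the squared-error objective lets a few large outlier entries dominate the fit.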
