Nonnegative matrix factorization (NMF) solves the following problem: find nonnegative matrices $\mathbf{A} \in \mathbb{R}_+^{M \times R}$ and $\mathbf{X} \in \mathbb{R}_+^{R \times T}$ such that $\mathbf{Y} \cong \mathbf{A}\mathbf{X}$, given only $\mathbf{Y} \in \mathbb{R}^{M \times T}$ and the prescribed rank $R$. This method has found a wide spectrum of applications in signal and image processing, such as blind source separation (BSS), spectra recovery, pattern recognition, segmentation, and clustering. Such a factorization is usually performed with an alternating gradient descent technique applied to the squared Euclidean distance or the Kullback–Leibler divergence. This approach underlies the widely known Lee–Seung NMF algorithms, which belong to a class of multiplicative iterative algorithms. It is well known that these algorithms, despite their low complexity, converge slowly, yield only strictly positive solutions, and can easily get trapped in local minima of a nonconvex cost function. In this paper, we propose to exploit the second-order terms of a cost function to overcome these disadvantages of gradient (multiplicative) algorithms. First, a projected quasi-Newton method is presented, in which a Hessian regularized with the Levenberg–Marquardt approach is inverted via the Q-less QR decomposition. Since the matrices $\mathbf{A}$ and/or $\mathbf{X}$ are usually sparse, a more sophisticated hybrid approach based on the gradient projection conjugate gradient (GPCG) algorithm of Moré and Toraldo is adapted for NMF. The gradient projection (GP) method is exploited to identify zero-valued (active) components, and Newton steps are then taken only on the positive (inactive) components, computed with the conjugate gradient (CG) method. As the cost function, we use the α-divergence, which unifies many well-known cost functions. We applied our new NMF method to a BSS problem with mixed signals and images. The results demonstrate the high robustness of our method.
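For concreteness, the following is a minimal NumPy sketch of the multiplicative Lee–Seung baseline (squared Euclidean cost) that the abstract contrasts against, not the paper's quasi-Newton or GPCG method; the function name, parameters, and initialization scheme are illustrative assumptions.

```python
import numpy as np

def nmf_multiplicative(Y, R, n_iter=200, eps=1e-9, seed=0):
    """Illustrative Lee-Seung multiplicative updates for Y ~= A @ X
    under the squared Euclidean cost (a baseline, not the paper's method).

    Y : (M, T) nonnegative data matrix
    R : prescribed rank of the factorization
    """
    rng = np.random.default_rng(seed)
    M, T = Y.shape
    A = rng.random((M, R))  # strictly positive random initialization
    X = rng.random((R, T))
    for _ in range(n_iter):
        # X <- X * (A^T Y) / (A^T A X); eps guards against division by zero
        X *= (A.T @ Y) / (A.T @ A @ X + eps)
        # A <- A * (Y X^T) / (A X X^T)
        A *= (Y @ X.T) / (A @ X @ X.T + eps)
    return A, X
```

Because each update multiplies the current entries by nonnegative ratios, strictly positive initial values remain strictly positive throughout, which illustrates the abstract's remark that these algorithms "give only a strictly positive solution" even when the true factors are sparse.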