Abstract

Fast optimization strategies are frequently used to repeatedly solve the Non-negative Dictionary Update (NDU) problem within many well-known intelligent computing algorithms, e.g., Non-negative Matrix Factorization (NMF) and its variants for different applications. However, few comparisons of their rationale, efficiency, and practical performance are available. In view of this, we present a theoretical and experimental comparison of four representative fast NDU strategies: the Multiplicative Update rules, the Rank-one Residue Iteration (RRI) method, Nesterov's Optimal Gradient method, and the Modified RRI (MRRI) method. In the theoretical part, we compare their procedures and flop counts per iteration, which indicate different rationales but similar efficiency. In the experimental part, we compare Graph-regularized NMF (GNMF) algorithms based on the four NDU strategies in terms of image-clustering performance, convergence, efficiency, and dictionary sparseness. Under our experimental setting, we observe that (i) the column-by-column update scheme of RRI and MRRI is more time-consuming in practice, (ii) while achieving clustering performance similar to the other three strategies, MRRI-based GNMF converges to a stationary point with a much larger objective value, and (iii) MRRI-based GNMF consistently generates dictionaries without interpretability, learning almost single-pixel image representations that are nevertheless discriminative.
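To make the first of the four strategies concrete, the standard Lee-Seung Multiplicative Update rules for NMF (without the graph regularizer used in GNMF) can be sketched as below. This is a minimal illustration, not the paper's implementation; the function name, rank `r`, iteration count, and the small constant `eps` added for numerical safety are our own assumptions.

```python
import numpy as np

def nmf_multiplicative(V, r, n_iter=200, eps=1e-10, seed=0):
    """Lee-Seung multiplicative updates for V ~= W @ H, all factors non-negative.

    V: (m, n) non-negative data matrix; r: factorization rank.
    Returns the dictionary W (m, r) and coefficients H (r, n).
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    # Random strictly positive initialization keeps the updates well-defined.
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        # H <- H * (W^T V) / (W^T W H): elementwise multiply/divide
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        # W <- W * (V H^T) / (W H H^T)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Because each update multiplies by a non-negative ratio, non-negativity is preserved automatically and the Frobenius objective is non-increasing, which is the rationale contrasted against the column-by-column updates of RRI and MRRI in the abstract.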
