Abstract

A non-negative latent factor (NLF) model with a single latent factor-dependent, non-negative and multiplicative update (SLF-NMU) algorithm is frequently adopted to extract useful knowledge from non-negative data represented by the high-dimensional and sparse (HiDS) matrices arising in various service-oriented applications. However, its convergence rate is slow. To address this issue, this study proposes a Generalized Nesterov's acceleration-incorporated, Non-negative and Adaptive Latent Factor (GNALF) model. It results from a) incorporating a generalized Nesterov's accelerated gradient (NAG) method into an SLF-NMU algorithm, thereby achieving an NAG-incorporated and element-oriented non-negative (NEN) algorithm for efficient parameter updates; and b) making its regularization and acceleration parameters self-adaptive by incorporating the principle of a particle swarm optimization (PSO) algorithm into the training process, thereby yielding a highly adaptive and practical model.
Empirical studies on six large sparse matrices from different recommendation service applications show that a GNALF model achieves a very high convergence rate without the need for hyper-parameter tuning, making its computational efficiency significantly higher than that of state-of-the-art models. Meanwhile, this efficiency gain does not come at the cost of accuracy, since its prediction accuracy is comparable with that of its peers. Hence, it can better serve practical service applications with real-time demands.
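The core idea described above can be illustrated with a minimal sketch: a multiplicative non-negative factorization of the observed entries of a sparse matrix, combined with a Nesterov-style extrapolation step that is projected back to the non-negative orthant. This is an illustrative assumption of the general scheme, not the paper's exact GNALF/NEN algorithm; the symbols `P`, `Q`, the momentum weight `eta`, and the momentum schedule are all choices made for the example.

```python
import numpy as np

def nag_nmf_sketch(R, mask, k=2, iters=300, eta=0.5, seed=0):
    """Illustrative sketch (assumed form, not the paper's algorithm):
    multiplicative non-negative updates restricted to observed entries
    of R (where mask == 1), with a Nesterov-style look-ahead step
    clipped to stay non-negative."""
    rng = np.random.default_rng(seed)
    m, n = R.shape
    P = rng.random((m, k)) + 0.1          # user latent factors
    Q = rng.random((k, n)) + 0.1          # item latent factors
    P_prev, Q_prev = P.copy(), Q.copy()
    eps = 1e-12
    Rm = R * mask                          # observed entries only
    for t in range(iters):
        # Nesterov-style extrapolation; the schedule below is an assumption
        beta = eta * t / (t + 3.0)
        P_hat = np.maximum(P + beta * (P - P_prev), eps)
        Q_hat = np.maximum(Q + beta * (Q - Q_prev), eps)
        P_prev, Q_prev = P.copy(), Q.copy()
        # multiplicative updates: positive numerator/denominator keep
        # every factor non-negative without an explicit projection
        P = P_hat * (Rm @ Q_hat.T) / ((((P_hat @ Q_hat) * mask) @ Q_hat.T) + eps)
        Q = Q_hat * (P.T @ Rm) / ((P.T @ ((P @ Q_hat) * mask)) + eps)
    return P, Q
```

Because each update multiplies a non-negative factor by a ratio of non-negative terms, the non-negativity constraint is maintained throughout training; the extrapolation step is what a (generalized) NAG method contributes toward faster convergence.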

