Abstract
Nonnegative latent factor (NLF) models can accurately represent high-dimensional and sparse (HiDS) matrices filled with nonnegative data, which are frequently encountered in industrial applications like recommender systems. Current NLF models mostly adopt the Euclidean distance or the Kullback-Leibler divergence as the objective function, which correspond to the special cases of β=2 and β=1 in the family of β-distance functions. When β is not restricted to these special cases, an NLF model's performance varies, making it highly attractive to investigate the resulting performance variations. We first divide the β-distance-based objective functions into three categories, i.e., β=0, β=1, and β≠0 or 1. Subsequently, we deduce the nonnegativity-preserving training rules corresponding to each kind of objective, thereby achieving different NLF models. Experimental results on industrial matrices indicate that the frequently adopted cases of β=2 or β=1 are not necessarily able to achieve the most accurate or efficient models. Hence, it is promising to further improve the performance of NLF models by carefully tuning the β-distance function adopted as the training objective.
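The abstract does not spell out the β-distance function or the training rules. The sketch below illustrates the commonly used β-divergence, whose three branches match the abstract's categories (β=0 is the Itakura-Saito divergence, β=1 the generalized Kullback-Leibler divergence, and β=2 recovers half the squared Euclidean distance), together with the widely known multiplicative update rule for β-divergence-based nonnegative factorization. This is a minimal illustration on dense positive data, not the paper's exact derivation; the function names are hypothetical, and a real HiDS setting would restrict the sums and updates to the observed entries.

```python
import numpy as np

def beta_divergence(V, W, H, beta):
    """Total beta-divergence between V and its approximation WH.

    beta = 0 gives the Itakura-Saito divergence, beta = 1 the
    generalized KL divergence, and beta = 2 half the squared
    Euclidean distance. Assumes V is dense and strictly positive.
    """
    Vhat = W @ H
    if beta == 0:
        return np.sum(V / Vhat - np.log(V / Vhat) - 1.0)
    if beta == 1:
        return np.sum(V * np.log(V / Vhat) - V + Vhat)
    return np.sum(
        (V**beta + (beta - 1) * Vhat**beta - beta * V * Vhat**(beta - 1))
        / (beta * (beta - 1))
    )

def multiplicative_update(V, W, H, beta, eps=1e-12):
    """One multiplicative update of H for the beta-divergence objective.

    The ratio of nonnegative numerator and denominator keeps H
    nonnegative; W is updated symmetrically via transposition.
    """
    Vhat = W @ H + eps
    numerator = W.T @ (Vhat**(beta - 2) * V)
    denominator = W.T @ Vhat**(beta - 1) + eps
    return H * numerator / denominator

# Toy usage: factorize a small positive matrix with beta = 0.5.
rng = np.random.default_rng(0)
V = rng.random((20, 15)) + 0.1          # dense positive toy data
W = rng.random((20, 5))
H = rng.random((5, 15))
for _ in range(100):
    H = multiplicative_update(V, W, H, beta=0.5)
    W = multiplicative_update(V.T, H.T, W.T, beta=0.5).T
print(beta_divergence(V, W, H, beta=0.5))
```

Because the updates multiply the current factors by nonnegative ratios, nonnegativity is preserved without explicit projection, which is what makes this family of rules a natural fit for NLF-style models.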