Abstract

High-dimensional and sparse (HiDS) data generated by recommender systems (RSs) contain rich knowledge regarding users’ potential preferences. A latent factor analysis (LFA) model enables efficient extraction of essential features from such data. However, an LFA model depends heavily on its hyper-parameters, such as the learning rate and the regularization coefficient, which must be chosen with care, and traditional grid-search-based manual tuning is extremely time-consuming and computationally expensive. To address this issue, this study proposes a hyper-parameter-evolutionary latent factor analysis (HLFA) model. Its main idea is to build a swarm by taking the hyper-parameters of each LFA-based model as a particle, and then to apply particle swarm optimization (PSO) to make both hyper-parameters, i.e., the learning rate and the regularization coefficient, self-adaptive according to a pre-defined fitness function. Experimental results on six HiDS matrices from real RSs indicate that an HLFA model outperforms several state-of-the-art LF models in computational efficiency and, most importantly, does so without loss of prediction accuracy for the missing data of an HiDS matrix.
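To make the abstract's idea concrete, the following is a minimal sketch of PSO-driven adaptation of an LFA model's learning rate and regularization coefficient. It is not the authors' implementation: the particle count, search ranges, PSO constants, latent rank, and all function names are assumptions, and the fitness function is taken here to be validation RMSE on held-out observed entries, which the abstract only describes as "a pre-defined fitness function".

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd_epochs(P, Q, train, lr, reg, epochs=2):
    """Run a few SGD epochs over observed (user, item, rating) triples."""
    for _ in range(epochs):
        for u, i, r in train:
            err = r - P[u] @ Q[i]
            pu = P[u].copy()
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * pu - reg * Q[i])

def rmse(data, P, Q):
    """Prediction error on a set of observed triples (used as PSO fitness)."""
    errs = [r - P[u] @ Q[i] for u, i, r in data]
    return float(np.sqrt(np.mean(np.square(errs))))

def hlfa_sketch(train, valid, n_users, n_items, rank=5, n_particles=8, iters=20):
    """Each particle = the (learning rate, regularization) pair of one LFA model."""
    lo, hi = np.array([1e-4, 1e-4]), np.array([0.05, 0.5])   # assumed search box
    pos = rng.uniform(lo, hi, size=(n_particles, 2))          # hyper-parameter particles
    vel = np.zeros_like(pos)
    models = [(rng.normal(scale=0.1, size=(n_users, rank)),
               rng.normal(scale=0.1, size=(n_items, rank)))
              for _ in range(n_particles)]
    pbest, pbest_fit = pos.copy(), np.full(n_particles, np.inf)
    gbest, gbest_fit = pos[0].copy(), np.inf
    w, c1, c2 = 0.7, 1.5, 1.5                                 # common PSO constants
    for _ in range(iters):
        for k, (P, Q) in enumerate(models):
            # Train this particle's LFA model with its current hyper-parameters.
            sgd_epochs(P, Q, train, lr=pos[k, 0], reg=pos[k, 1])
            fit = rmse(valid, P, Q)                           # fitness evaluation
            if fit < pbest_fit[k]:
                pbest[k], pbest_fit[k] = pos[k].copy(), fit
            if fit < gbest_fit:
                gbest, gbest_fit = pos[k].copy(), fit
        # Standard PSO velocity/position update applied to the hyper-parameters.
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
    return gbest, gbest_fit
```

In this sketch each particle keeps its own latent factor matrices and continues training between PSO updates, so the hyper-parameters evolve as training proceeds rather than being fixed in advance, which is the contrast the abstract draws with grid-search tuning.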
