Abstract

High-dimensional and sparse (HiDS) matrices generated by recommender systems contain rich knowledge regarding desired patterns such as users' potential preferences and community tendencies. Latent factor (LF) analysis proves highly effective at extracting such knowledge from an HiDS matrix. Stochastic gradient descent (SGD) is an efficient algorithm for building an LF model. However, current LF models mostly adopt a standard SGD algorithm. Can SGD be extended in various ways to improve the resulting models' convergence rate and prediction accuracy for missing data? Are such SGD extensions compatible with an LF model? To answer these questions, this paper carefully investigates eight extended SGD algorithms and proposes eight corresponding novel LF models. Experimental results on two HiDS matrices generated by real recommender systems show that, compared with an LF model relying on standard SGD, LF models relying on the extended algorithms achieve: 1) higher prediction accuracy for missing data; 2) a faster convergence rate; and 3) greater model diversity.
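The abstract does not reproduce the update rule, so as a point of reference the sketch below shows a standard SGD update for a latent factor model trained on the observed entries of an HiDS matrix only, i.e., the baseline the paper's eight extensions build on. This is a minimal illustration, not the paper's method: the function name, hyperparameter values, and toy data are all assumptions introduced here.

```python
import numpy as np

def sgd_latent_factor(entries, num_users, num_items, rank=20,
                      lr=0.01, reg=0.05, epochs=50, seed=0):
    """Fit R ~ P @ Q.T by plain SGD over the observed entries only.

    entries: list of (user, item, rating) tuples -- just the known cells
    of the HiDS matrix, so cost scales with the number of observed
    ratings rather than the full matrix size.
    """
    rng = np.random.default_rng(seed)
    P = rng.normal(scale=0.1, size=(num_users, rank))  # user latent factors
    Q = rng.normal(scale=0.1, size=(num_items, rank))  # item latent factors
    entries = list(entries)
    for _ in range(epochs):
        rng.shuffle(entries)                  # visit known cells in random order
        for u, i, r in entries:
            err = r - P[u] @ Q[i]             # error on one observed rating
            p_grad = err * Q[i] - reg * P[u]  # gradient of L2-regularized loss
            q_grad = err * P[u] - reg * Q[i]
            P[u] += lr * p_grad               # the "standard" update the paper extends
            Q[i] += lr * q_grad
    return P, Q

# Toy usage: three users, three items, five observed ratings.
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 1, 4.0), (2, 0, 1.0), (2, 2, 2.0)]
P, Q = sgd_latent_factor(ratings, num_users=3, num_items=3, rank=4, epochs=200)
print("predicted rating for (user 1, item 2):", P[1] @ Q[2])
```

The extensions the paper studies would modify the two update lines (e.g., via momentum-style or adaptive-step variants) while keeping this per-entry training loop intact.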
