Abstract

Collaborative Filtering (CF) can be achieved through Matrix Factorization (MF) with high prediction accuracy and scalability. Most current MF-based recommenders, however, are serial, which prevents them from benefiting from the rapid progress in parallel programming techniques. Aiming to parallelize the CF recommender based on Regularized Matrix Factorization (RMF), we first carry out a theoretical analysis of the parameter updating process of RMF, from which we find that the main obstacle to parallelism is the inter-dependence between item and user features. To remove this inter-dependence among parameters, we apply the Alternating Stochastic Gradient Descent (ASGD) solver to the parameter training process. On this basis, we propose the parallel RMF (P-RMF) model, whose training process can be parallelized by training different user/item features simultaneously. Experiments on two large, real datasets show that P-RMF provides a faster solution to the CF problem than the original RMF and another parallel MF-based recommender.
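
The sketch below illustrates the alternating update scheme the abstract describes, assuming the standard regularized MF objective (squared error on observed ratings plus L2 penalties on user and item factors). It is not the paper's implementation; all names (train_epoch, ratings_by_user, etc.) and the learning-rate/regularization values are illustrative assumptions. The point it shows is why alternation removes the inter-dependence: within each phase one factor matrix is frozen, so each row update touches disjoint parameters and the loop can be parallelized.

```python
# Minimal sketch of an alternating SGD epoch for regularized MF, assuming the
# objective  sum_{(u,i) observed} (r_ui - p_u . q_i)^2 + lam*(||p_u||^2 + ||q_i||^2).
# Hypothetical names and hyperparameters; not the authors' code.
import numpy as np

def train_epoch(P, Q, ratings_by_user, ratings_by_item, lr=0.01, lam=0.05):
    """One alternating epoch: item factors Q are frozen while user factors P
    are updated, then the roles are swapped. Within each phase the per-row
    updates write disjoint parameters, so the loop bodies can run in parallel."""
    # Phase 1: update each user's factors with Q fixed.
    # Each iteration writes only P[u], so users can be processed concurrently.
    for u, items in ratings_by_user.items():
        for i, r in items:
            err = r - P[u] @ Q[i]
            P[u] += lr * (err * Q[i] - lam * P[u])
    # Phase 2: update each item's factors with P fixed.
    # Each iteration writes only Q[i], so items can be processed concurrently.
    for i, users in ratings_by_item.items():
        for u, r in users:
            err = r - P[u] @ Q[i]
            Q[i] += lr * (err * P[u] - lam * Q[i])
    return P, Q

# Toy usage: 3 users, 4 items, a handful of observed ratings.
rng = np.random.default_rng(0)
P = 0.1 * rng.standard_normal((3, 8))   # user factors
Q = 0.1 * rng.standard_normal((4, 8))   # item factors
obs = [(0, 1, 4.0), (0, 3, 2.0), (1, 0, 5.0), (2, 2, 3.0)]
by_user = {u: [(i, r) for uu, i, r in obs if uu == u] for u in range(3)}
by_item = {i: [(u, r) for u, ii, r in obs if ii == i] for i in range(4)}
for _ in range(20):
    P, Q = train_epoch(P, Q, by_user, by_item)
```

In plain joint SGD, a single observed rating updates both its user and item factors at once, so ratings sharing a user or an item cannot be processed independently; the alternation above is what removes that coupling and enables the per-user/per-item parallelism claimed for P-RMF.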
