Abstract

The structural shift toward the digital transformation of online sales elevates the importance of parallel processing techniques in recommender systems, particularly in the pandemic and post-pandemic era. Matrix factorization (MF) is a popular and scalable collaborative filtering (CF) approach for predicting user preferences in recommender systems. Stochastic Gradient Descent (SGD) is one of the most widely used optimization techniques for MF. Parallel SGD methods help address the Big Data challenges posed by the wide range of products and the sparsity of user ratings. However, the convergence rate and accuracy of these methods suffer from the dependency between the user and item latent factors, especially in large-scale problems. Moreover, their performance is sensitive to the choice of learning rate. This paper proposes a new parallel method that removes these dependencies to boost speed-up and applies fractional calculus to improve accuracy and convergence rate. We also employ adaptive learning rates to further enhance performance. The proposed method is implemented on the Compute Unified Device Architecture (CUDA) platform. We evaluate it on real-world datasets and compare the results with close baselines. The results show that our method achieves high accuracy and a fast convergence rate in addition to a high degree of parallelism.
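
For context, the abstract builds on the standard SGD-based MF setup; the following is a minimal sketch of that formulation, assuming the usual regularized squared-error objective. The symbols $p_u$, $q_i$, $\lambda$, and $\eta$ are standard in the CF literature rather than taken from the paper, and the abstract does not state the authors' exact objective or update rules.

\[
\min_{P,Q} \sum_{(u,i) \in \mathcal{K}} \left( r_{ui} - p_u^{\top} q_i \right)^2 + \lambda \left( \lVert p_u \rVert^2 + \lVert q_i \rVert^2 \right)
\]

Given an observed rating $r_{ui}$ with prediction error $e_{ui} = r_{ui} - p_u^{\top} q_i$, the SGD updates are

\[
p_u \leftarrow p_u + \eta \left( e_{ui}\, q_i - \lambda\, p_u \right), \qquad
q_i \leftarrow q_i + \eta \left( e_{ui}\, p_u - \lambda\, q_i \right).
\]

Each update to $p_u$ reads the current $q_i$ and vice versa; this mutual dependency is what complicates parallelization, since concurrent workers touching the same user or item can overwrite one another's updates. Fractional-calculus approaches in the optimization literature typically replace the integer-order gradient in such updates with a fractional-order derivative (e.g., of Caputo type), but the abstract does not specify the exact form used here.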
