Abstract

Recommender systems have long been studied in the literature. Collaborative filtering is one of the most widely adopted recommendation techniques and is usually applied to explicit feedback, e.g., rating scores. However, implicit feedback, e.g., click data, is believed to reveal users' latent preferences, and a number of research attempts have been made in this direction. To the best of our knowledge, this paper is the first attempt to adapt Wasserstein autoencoders to the collaborative filtering problem. In particular, we propose a new loss function that introduces an $$L_1$$ regularization term to learn a sparse, low-rank representation of the latent variables. We then carefully design (1) a new cost function to minimize the data reconstruction error, and (2) appropriate distance metrics for computing the KL divergence between the learned distribution of the latent variables and the underlying true data distribution. Rigorous experiments are performed on three widely adopted datasets, on which both state-of-the-art approaches, e.g., Mult-VAE and Mult-DAE, and baseline models are evaluated. The promising experimental results demonstrate that the proposed approach is superior to the compared approaches with respect to the evaluation criteria Recall@R and NDCG@R.
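
For illustration only, an $$L_1$$-regularized Wasserstein-autoencoder objective of the kind described above might take the following general form, where the encoder $$Q$$, decoder $$G$$, latent prior $$P_Z$$, reconstruction cost $$c$$, and weights $$\lambda$$ and $$\beta$$ are assumed notation for this sketch rather than the paper's own formulation:

$$\mathcal{L}(Q,G) \;=\; \mathbb{E}_{x \sim P_X}\Big[\, c\big(x,\, G(Q(x))\big) \;+\; \beta\, \lVert Q(x) \rVert_1 \Big] \;+\; \lambda\, D_{\mathrm{KL}}\big(Q_Z \,\|\, P_Z\big)$$

That is, a reconstruction term over observed interaction vectors, a sparsity-inducing penalty on the latent code, and a divergence penalty matching the aggregated latent distribution $$Q_Z$$ to the prior $$P_Z$$.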
