Abstract

Recommending appropriate tags to items facilitates content organization, retrieval, and consumption, among other applications; hybrid tag recommender systems have been used to integrate collaborative information and content information for better recommendations. In this article, we propose a multi-auxiliary augmented collaborative variational auto-encoder (MA-CVAE) for tag recommendation, which couples item collaborative information with item multi-auxiliary information, i.e., content and the social graph, by defining a generative process. Specifically, the model learns deep latent embeddings from each type of item auxiliary information using variational auto-encoders (VAEs), which form a generative distribution over each auxiliary view by introducing a latent variable parameterized by a deep neural network. Moreover, to recommend tags for new items, the multi-auxiliary latent embeddings are used as a surrogate for the item embedding and passed through the item decoder to predict the recommendation probability of each tag, with reconstruction losses added during training to constrain the feedback predictions generated from the different auxiliary embeddings. In addition, an inductive variational graph auto-encoder is designed to infer the latent embeddings of new items at test time, so that item social information can also be exploited for new items. Extensive experiments on the MovieLens and CiteULike datasets demonstrate the effectiveness of our method.
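To make the described architecture concrete, below is a minimal, hypothetical sketch (not the authors' implementation) of one auxiliary-view VAE whose latent embedding both reconstructs the auxiliary input and is decoded into per-tag recommendation probabilities, with a combined reconstruction, tag-prediction, and KL loss; all layer sizes, dimensions, and the single-view simplification are illustrative assumptions.

```python
# Hypothetical sketch of a single-auxiliary-view VAE with a tag decoder.
# Layer sizes and the one-view setup are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AuxiliaryVAE(nn.Module):
    """Encode one auxiliary view (e.g., item content features) into a latent z,
    reconstruct the view, and decode z into tag recommendation probabilities."""
    def __init__(self, aux_dim, latent_dim, num_tags, hidden=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(aux_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        self.aux_dec = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, aux_dim))
        self.tag_dec = nn.Linear(latent_dim, num_tags)  # item decoder surrogate

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z from N(mu, sigma^2).
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.aux_dec(z), torch.sigmoid(self.tag_dec(z)), mu, logvar

def loss_fn(x, x_rec, tag_true, tag_pred, mu, logvar, beta=1.0):
    # Auxiliary reconstruction + tag prediction loss + KL regularizer.
    rec = F.mse_loss(x_rec, x, reduction="sum")
    tag = F.binary_cross_entropy(tag_pred, tag_true, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + tag + beta * kl

# Toy usage with made-up shapes: 8 items, 1000-d content, 200 candidate tags.
model = AuxiliaryVAE(aux_dim=1000, latent_dim=50, num_tags=200)
x = torch.rand(8, 1000)
tags = torch.randint(0, 2, (8, 200)).float()
x_rec, tag_pred, mu, logvar = model(x)
loss = loss_fn(x, x_rec, tags, tag_pred, mu, logvar)
loss.backward()
```

In the full model as described in the abstract, several such auxiliary encoders (content, social graph) would be coupled with collaborative item embeddings, and an inductive variational graph auto-encoder would supply graph-based embeddings for new items at test time.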
