Abstract
As a fundamental task, link prediction has pervasive applications in social networks, webpage networks, information retrieval and bioinformatics. Among link prediction methods, latent variable models, such as the relational topic model and its variants, which jointly model both network structure and node attributes, have shown promising performance for predicting network structures and discovering latent representations. However, these methods either have limited capability for learning representations from high-dimensional data or consider only the text modality of the content, which makes them poorly suited to the current multimedia scenario. This paper proposes a Bayesian deep generative model called the relational variational autoencoder (RVAE) that considers both links and content for link prediction in the multimedia scenario. The model learns deep latent representations from content data in an unsupervised manner, and also learns network structures from both content and link information. Unlike previous deep learning methods with denoising criteria, the proposed RVAE learns a latent distribution for content in latent space, instead of observation space, through an inference network, and can be easily extended to multimedia modalities other than text. Experiments show that RVAE significantly outperforms state-of-the-art link prediction methods and delivers more robust performance.
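The abstract's key mechanism is an inference network that maps content to a Gaussian distribution in latent space, with latent codes then used to score links. A minimal sketch of that idea is below; it is not the authors' RVAE, and all dimensions, the one-layer linear encoder, and the sigmoid inner-product link score are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): 4 nodes, 8 content features, 2 latent dims.
n_nodes, d_content, d_latent = 4, 8, 2

# Toy one-layer "inference network": maps content x to q(z|x) = N(mu, diag(sigma^2)).
W_mu = rng.normal(scale=0.1, size=(d_content, d_latent))
W_logvar = rng.normal(scale=0.1, size=(d_content, d_latent))

def infer(x):
    """Return mean and log-variance of the approximate posterior q(z|x)."""
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps, eps ~ N(0, I) (the reparameterization trick)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def link_probability(z_i, z_j):
    """Score a candidate link as a sigmoid of the inner product of latent codes."""
    return 1.0 / (1.0 + np.exp(-z_i @ z_j))

x = rng.normal(size=(n_nodes, d_content))  # toy content features per node
mu, logvar = infer(x)                      # latent-space distribution per node
z = reparameterize(mu, logvar)             # sampled latent representations
p = link_probability(z[0], z[1])           # probability of a link between nodes 0 and 1
```

In a full model, the encoder would be a deep network trained with a reconstruction term and a KL regularizer, and the link likelihood would be trained jointly with the content likelihood, as the abstract describes.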