Abstract
Graph representation learning is a fundamental research problem for modeling relational data and benefits a number of downstream applications. Traditional Bayesian random graph models, such as stochastic blockmodels (SBMs) and latent space models (LSMs), have proved effective in learning interpretable representations. To leverage both the good interpretability of random graph models and the powerful representation learning ability of deep learning-based methods such as graph neural networks (GNNs), some studies propose deep generative methods that combine SBMs with GNNs. However, these combined methods do not fully account for the statistical properties of graphs, which limits their interpretability and their applicability to directed graphs. To address these limitations, in this paper we propose a Deep Latent Space Model (DLSM) for interpretable representation learning on directed graphs, which combines LSMs and GNNs via a novel “lattice VAE” architecture. The proposed model generates multiple latent variables as node representations to adapt to the structure of directed graphs and to improve model interpretability. Extensive experiments on representative real-world datasets demonstrate that our model achieves state-of-the-art performance on link prediction and community detection while retaining good interpretability.