Abstract

Graph representation learning aims to represent vertices as low-dimensional, real-valued vectors that facilitate downstream tasks such as node classification and link prediction. Recently, several graph representation learning frameworks that approximate the underlying true connectivity distribution of the vertices have shown superior performance. These methods measure the discrepancy between the true connectivity distribution and the generated connectivity distribution with the Kullback-Leibler or Jensen-Shannon divergence. However, because these divergences are not continuous with respect to the generator's parameters, such methods are prone to unstable training and poor convergence. In contrast, the Wasserstein distance is continuous and differentiable almost everywhere, so it yields more reliable gradients and therefore more stable training and better convergence. In this paper, we use the Wasserstein distance to characterize the discrepancy between the underlying true connectivity distribution and the generated distribution in graph representation learning. Experimental results show that our method outperforms existing baselines on both node classification and link prediction.
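For context, the continuity property invoked above can be made precise with the Kantorovich-Rubinstein dual form of the Wasserstein-1 distance; this is standard background and is not spelled out in the abstract itself. Writing p_true for the true connectivity distribution and p_G for the generated one,

W_1(p_{\mathrm{true}}, p_G) = \sup_{\|f\|_{L} \le 1} \; \mathbb{E}_{v \sim p_{\mathrm{true}}}[f(v)] - \mathbb{E}_{v \sim p_G}[f(v)],

where the supremum ranges over all 1-Lipschitz functions f. Under mild assumptions on the generator, W_1 is continuous in the generator's parameters and differentiable almost everywhere, which is the source of the more reliable gradients the abstract refers to; the KL and JS divergences do not enjoy this property when the two distributions have little or no overlapping support.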
