Abstract
Graph representation learning, or graph embedding, is a classical topic in data mining. Current embedding methods are mostly non-parametric, where all embedding points are unconstrained free points in the target space. These approaches suffer from limited scalability and an over-flexible representation. In this paper, we propose a parametric graph embedding that fuses graph topology information with node content information. The embedding points are obtained through a highly flexible non-linear transformation from node content features to the target space. This transformation is learned using the contrastive loss function of a siamese network so as to preserve node adjacency in the input graph. On several benchmark network datasets, the proposed GraPASA method outperforms state-of-the-art techniques by a significant margin on graph representation tasks.
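To make the training objective concrete, the following is a minimal sketch of the general idea described above: a shared-weight (siamese) encoder maps node content features to the embedding space, and a contrastive loss pulls adjacent node pairs together while pushing non-adjacent pairs apart beyond a margin. This is not the GraPASA implementation; the layer sizes, margin value, pair-sampling scheme, and randomly generated features are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseEmbedder(nn.Module):
    """Parametric map from node content features to the embedding space."""
    def __init__(self, in_dim, hidden_dim, embed_dim):
        super().__init__()
        # Illustrative MLP encoder; the paper's actual architecture may differ.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, embed_dim),
        )

    def forward(self, x_i, x_j):
        # The same (shared-weight) encoder is applied to both nodes of a pair.
        return self.encoder(x_i), self.encoder(x_j)

def contrastive_loss(z_i, z_j, label, margin=1.0):
    """label = 1 for adjacent node pairs, 0 for non-adjacent pairs."""
    dist = F.pairwise_distance(z_i, z_j)
    pos = label * dist.pow(2)                          # pull adjacent nodes together
    neg = (1 - label) * F.relu(margin - dist).pow(2)   # push non-adjacent nodes apart
    return (pos + neg).mean()

# Toy training step with assumed dimensions (e.g. Cora-like feature size).
model = SiameseEmbedder(in_dim=1433, hidden_dim=256, embed_dim=64)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x_i = torch.randn(32, 1433)                  # features of the first node in each pair
x_j = torch.randn(32, 1433)                  # features of the second node in each pair
label = torch.randint(0, 2, (32,)).float()   # 1 if the pair is an edge in the graph

optimizer.zero_grad()
z_i, z_j = model(x_i, x_j)
loss = contrastive_loss(z_i, z_j, label)
loss.backward()
optimizer.step()
```

In practice the positive pairs would be drawn from the input graph's edge set and the negative pairs from sampled non-adjacent node pairs; once trained, the encoder can embed nodes directly from their content features, which is what makes the embedding parametric rather than a table of free points.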