Abstract
Unsupervised graph representation learning is a challenging task that embeds graph data into a low-dimensional space without label guidance. Recently, graph autoencoders have proven to be an effective way to solve this problem on attributed networks. However, most existing graph autoencoder-based embedding algorithms reconstruct only the node features or the affinity matrix and do not fully leverage the latent information encoded in the low-dimensional representation. In this study, we propose a dual-decoder graph autoencoder model for attributed graph embedding. The proposed framework embeds the graph topological structure and node attributes into a compact representation, and two decoders are then trained to reconstruct the node attributes and the graph structure simultaneously. Experimental results on clustering and link prediction tasks show that the proposed model outperforms state-of-the-art approaches.
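To make the dual-decoder idea concrete, the following is a minimal sketch of such an architecture. It assumes a GCN-style encoder, a linear attribute decoder, and an inner-product structure decoder with a weighted sum of the two reconstruction losses; the class name `DualDecoderGAE`, the layer choices, and the weighting parameter `alpha` are illustrative assumptions, not the paper's exact specification.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualDecoderGAE(nn.Module):
    """Illustrative dual-decoder graph autoencoder.

    An encoder maps node attributes X and a normalized adjacency matrix
    A_hat to a latent representation Z; one decoder reconstructs the node
    attributes, the other reconstructs the graph structure.
    """

    def __init__(self, in_dim, hidden_dim, latent_dim):
        super().__init__()
        # Encoder: two graph-convolution-style layers (neighborhood
        # aggregation is done by multiplying with A_hat).
        self.enc1 = nn.Linear(in_dim, hidden_dim)
        self.enc2 = nn.Linear(hidden_dim, latent_dim)
        # Attribute decoder: maps Z back to the attribute space.
        self.attr_dec = nn.Linear(latent_dim, in_dim)

    def encode(self, x, a_hat):
        h = F.relu(a_hat @ self.enc1(x))
        return a_hat @ self.enc2(h)

    def forward(self, x, a_hat):
        z = self.encode(x, a_hat)
        x_rec = self.attr_dec(z)            # attribute reconstruction
        a_rec = torch.sigmoid(z @ z.t())    # structure reconstruction (inner product)
        return x_rec, a_rec, z


def joint_loss(x, a, x_rec, a_rec, alpha=1.0):
    """Joint objective: attribute reconstruction + structure reconstruction.

    `alpha` (an assumed hyperparameter) balances the two terms.
    """
    attr_loss = F.mse_loss(x_rec, x)
    struct_loss = F.binary_cross_entropy(a_rec, a)
    return attr_loss + alpha * struct_loss
```

Once trained, the latent matrix `z` would serve as the node embedding used for downstream clustering, while `a_rec` provides edge probabilities for link prediction.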