Abstract

The Graph Auto-Encoder (GAE) has emerged as a powerful node embedding method and has attracted extensive interest lately. GAE and most of its extensions rely on a series of encoding layers to learn effective node embeddings, with corresponding decoding layers trying to recover the original features. Promising performance on challenging tasks has demonstrated GAE's strong representational ability. Meanwhile, Subgraph Convolutional Networks (SCNs), an extension of Graph Convolutional Networks (GCNs), can aggregate both tagged and local structural features in an artful way. In this paper, we show that SCNs can be improved with an attention mechanism (yielding AttSCNs) to acquire better representational capability, making them well suited to serve as an encoder. We then develop inversed AttSCNs and propose a novel auto-encoder, the Attention-Based Auto-Encoder (ABAE). This architecture uses the attention mechanism to gain insight into the data. We evaluate our models on challenging tasks: we construct AttSCNs for node classification, where the results demonstrate that they produce high-quality embeddings, and we apply the proposed ABAE to link prediction, where experiments show that it achieves state-of-the-art performance.
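Since the paper's implementation is not reproduced here, the following is a minimal PyTorch sketch of an attention-weighted graph convolution encoder in the spirit of AttSCNs. The class name, layer sizes, and the tanh-scored pairwise attention are illustrative assumptions, not the authors' exact architecture; the inner-product link scoring at the end follows the standard GAE recipe rather than the inversed-decoder design described in the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGraphEncoder(nn.Module):
    """Sketch of an attention-weighted graph convolution encoder.

    Neighbor features are aggregated with learned attention scores,
    loosely following the AttSCN idea from the abstract. All names and
    hyper-parameters are illustrative assumptions.
    """
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, hid_dim, bias=False)
        # scores a (source, target) pair of projected node features
        self.att = nn.Linear(2 * hid_dim, 1, bias=False)

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: (N, N) dense 0/1 adjacency
        h = self.proj(x)                               # (N, hid)
        N = h.size(0)
        # pairwise attention logits e_ij from concatenated features
        hi = h.unsqueeze(1).expand(N, N, -1)           # hi[i, j] = h[i]
        hj = h.unsqueeze(0).expand(N, N, -1)           # hj[i, j] = h[j]
        e = torch.tanh(self.att(torch.cat([hi, hj], dim=-1))).squeeze(-1)
        # mask non-edges, normalize over each node's neighborhood
        e = e.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(e, dim=1)                # (N, N) weights
        alpha = torch.nan_to_num(alpha)                # isolated nodes -> 0
        return F.relu(alpha @ h)                       # weighted aggregation

# toy usage: 4 nodes, 3-dim features, a small ring graph
x = torch.randn(4, 3)
adj = torch.tensor([[0, 1, 0, 1],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [1, 0, 1, 0]], dtype=torch.float)
z = AttentionGraphEncoder(3, 8)(x, adj)                # (4, 8) embeddings
# GAE-style link prediction: score each node pair by inner product
scores = torch.sigmoid(z @ z.t())
```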
