Abstract

Graph convolutional networks (GCNs) have made remarkable progress in learning good representations from graph-structured data. The layer-wise propagation rule of a conventional GCN is designed so that the feature aggregation at each node depends on the features of its one-hop neighbouring nodes. Adding an attention layer over the GCN allows the network to assign different importance to the various one-hop neighbours. These methods can capture the properties of a static network, but they are not well suited to capturing the temporal patterns in time-varying networks. In this work, we propose a temporal graph attention network (TempGAN), which aims to learn representations from a continuous-time temporal network by preserving the temporal proximity between nodes. First, we perform a temporal walk over the network to generate a positive pointwise mutual information (PPMI) matrix which denotes the temporal correlation between nodes. Next, we design a TempGAN architecture which uses both adjacency and PPMI information to generate node embeddings from the temporal network. Finally, we conduct link prediction experiments by designing a TempGAN autoencoder to evaluate the quality of the generated embeddings, and the results are compared with other state-of-the-art methods.
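As a rough illustration of the first step, the sketch below builds a PPMI matrix from timestamp-respecting random walks. It is a minimal sketch under assumed conventions (a dictionary-of-edge-lists input, dense matrices, and a fixed co-occurrence window); the paper's exact walk and co-occurrence definitions may differ.

    # Sketch: temporal walks + PPMI matrix (illustrative, not the paper's exact procedure).
    import random
    from collections import defaultdict
    import numpy as np

    def temporal_walk(adj, start, walk_len):
        """adj: {u: [(v, t), ...]}; the walk follows edges with non-decreasing timestamps."""
        walk, t_prev = [start], float("-inf")
        for _ in range(walk_len - 1):
            candidates = [(v, t) for v, t in adj.get(walk[-1], []) if t >= t_prev]
            if not candidates:
                break
            v, t_prev = random.choice(candidates)
            walk.append(v)
        return walk

    def ppmi_matrix(adj, num_nodes, walks_per_node=10, walk_len=8, window=2):
        # Count co-occurrences of nodes that appear close together on temporal walks.
        counts = np.zeros((num_nodes, num_nodes))
        for u in range(num_nodes):
            for _ in range(walks_per_node):
                walk = temporal_walk(adj, u, walk_len)
                for i, a in enumerate(walk):
                    for b in walk[i + 1 : i + 1 + window]:
                        counts[a, b] += 1
                        counts[b, a] += 1
        total = counts.sum()
        if total == 0:
            return counts
        p_ij = counts / total
        p_i = counts.sum(axis=1, keepdims=True) / total
        p_j = counts.sum(axis=0, keepdims=True) / total
        with np.errstate(divide="ignore", invalid="ignore"):
            pmi = np.log(p_ij / (p_i * p_j))
        pmi[~np.isfinite(pmi)] = 0.0
        return np.maximum(pmi, 0.0)  # keep only the positive part of the PMI

    # Example (hypothetical toy network, edges annotated with timestamps):
    #   adj = {0: [(1, 1.0), (2, 3.0)], 1: [(2, 2.0)], 2: []}
    #   P = ppmi_matrix(adj, num_nodes=3)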

Highlights

  • Learning from non-Euclidean data [1] has gained a lot of scientific attention in recent years

  • Since the graph convolutional network (GCN) is among the most prominent and effective methods for network embedding, we aim to develop a GCN-based method which can learn node embeddings by considering the temporal information present in the network

  • We address the problem of temporal network embedding, which aims to map the nodes of a network to a vector space while preserving the temporal information


Summary

Introduction

Learning from non-Euclidean data [1] has gained a lot of scientific attention in recent years. We follow the attention mechanism suggested by GAT so as to incorporate the hypothesis that different temporal neighbours may contribute differently during the aggregation process. Both GCN and GAT consider the edge distribution of the network to be static and are therefore not well suited for representation learning from a temporal network whose edge distribution varies over time; they also focus on preserving only the first-order neighbourhood while generating embeddings. This work provides a methodology for incorporating temporal information into a graph attention network to generate time-aware node embeddings.
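For context, a conventional GCN layer propagates features as H(l+1) = sigma(D^-1/2 A_hat D^-1/2 H(l) W(l)), where A_hat is the adjacency matrix with self-loops, so every one-hop neighbour contributes with a fixed, degree-normalised weight; GAT replaces these fixed weights with learned attention coefficients. The sketch below shows one way a GAT-style layer could attend over both the one-hop (adjacency) and temporal (PPMI) neighbourhoods. The layer name, dense-matrix interface, and masking scheme are assumptions for illustration, not the paper's exact architecture.

    # Sketch: GAT-style layer whose neighbourhood is defined by both the adjacency
    # matrix and the temporal PPMI matrix (names and shapes are assumptions).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TemporalAttentionLayer(nn.Module):
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.W = nn.Linear(in_dim, out_dim, bias=False)   # shared linear transform
            self.a = nn.Linear(2 * out_dim, 1, bias=False)    # attention scoring function

        def forward(self, x, adj, ppmi):
            # x: (N, in_dim); adj, ppmi: (N, N) dense matrices.
            h = self.W(x)                                     # (N, out_dim)
            n = h.size(0)
            pair = torch.cat(
                [h.unsqueeze(1).expand(n, n, -1),
                 h.unsqueeze(0).expand(n, n, -1)], dim=-1)    # (N, N, 2*out_dim)
            e = F.leaky_relu(self.a(pair).squeeze(-1))        # raw attention scores
            # A node attends to its one-hop neighbours and its temporal (PPMI) neighbours.
            support = (adj > 0) | (ppmi > 0)
            e = e.masked_fill(~support, float("-inf"))
            alpha = torch.softmax(e, dim=-1)
            alpha = torch.nan_to_num(alpha)                   # isolated nodes -> all-zero rows
            return F.elu(alpha @ h)

    # Usage (hypothetical shapes): layer = TemporalAttentionLayer(16, 8)
    #   z = layer(torch.randn(5, 16), adj, ppmi)   # adj, ppmi are (5, 5) float tensors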

Related works
Method
Methodology
Experimental setup
Baseline methods
Result and analysis
Objective
Findings
Conclusion
