Abstract

Dynamic networks are used in a wide range of fields, including social network analysis, recommender systems, and epidemiology. Representing complex networks as structures that change over time allows network models to leverage not only structural but also temporal patterns. However, because the dynamic network literature stems from diverse fields and uses inconsistent terminology, it is challenging to navigate. Meanwhile, graph neural networks (GNNs) have gained considerable attention in recent years for their ability to perform well on a range of network science tasks, such as link prediction and node classification. Despite the popularity of graph neural networks and the proven benefits of dynamic network models, there has been little focus on graph neural networks for dynamic networks. To address the challenges that arise from this research crossing diverse fields, and to survey dynamic graph neural networks, this work is split into two main parts. First, to resolve the ambiguity of dynamic network terminology, we establish a foundation of dynamic networks with consistent, detailed terminology and notation. Second, we present a comprehensive survey of dynamic graph neural network models using the proposed terminology.

Highlights

  • The bulk of network science literature focuses on static networks, yet every network existing in the real world changes over time

  • In the continuous case there is more variety, since node aggregation can no longer be done using traditional graph neural networks (GNNs). Given this definition of representation learning, network models in which recurrent neural networks (RNNs) are used but the network structure is learned by methods other than node aggregation are not considered dynamic graph neural networks (DGNNs)

  • We identify two kinds of discrete DGNNs: stacked DGNNs and integrated DGNNs (the sketch below illustrates the distinction)
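
To make the stacked/integrated distinction concrete, here is a minimal PyTorch sketch under simplifying assumptions (dense adjacency matrices with self-loops, mean neighbourhood aggregation, a GRU for the temporal component); all class and function names are illustrative, not implementations from the surveyed papers:

```python
import torch
import torch.nn as nn

def mean_aggregate(adj, x):
    """One round of mean neighbourhood aggregation (adj includes self-loops)."""
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
    return adj @ x / deg

class StackedDGNN(nn.Module):
    """Stacked DGNN: a GNN encodes each snapshot independently,
    then an RNN links the per-snapshot node embeddings over time."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.gnn = nn.Linear(in_dim, hid_dim)
        self.rnn = nn.GRU(hid_dim, hid_dim, batch_first=True)

    def forward(self, adjs, feats):
        # adjs: list of T (n, n) snapshot adjacencies; feats: list of T (n, in_dim)
        z = torch.stack(
            [torch.relu(self.gnn(mean_aggregate(a, x))) for a, x in zip(adjs, feats)],
            dim=1,
        )                      # (n, T, hid_dim) per-snapshot embeddings
        h, _ = self.rnn(z)
        return h[:, -1]        # node embeddings after the last snapshot

class IntegratedDGNN(nn.Module):
    """Integrated DGNN: graph aggregation is folded into the recurrent
    update itself (a minimal graph-convolutional, GRU-style cell)."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.w_xu = nn.Linear(in_dim, hid_dim)
        self.w_hu = nn.Linear(hid_dim, hid_dim)
        self.w_xc = nn.Linear(in_dim, hid_dim)
        self.w_hc = nn.Linear(hid_dim, hid_dim)

    def forward(self, adjs, feats):
        n, hid = feats[0].size(0), self.w_hc.out_features
        h = feats[0].new_zeros(n, hid)
        for a, x in zip(adjs, feats):
            # both the input and the hidden state are diffused over the snapshot
            xa, ha = mean_aggregate(a, x), mean_aggregate(a, h)
            u = torch.sigmoid(self.w_xu(xa) + self.w_hu(ha))   # update gate
            c = torch.tanh(self.w_xc(xa) + self.w_hc(ha))      # candidate state
            h = u * c + (1 - u) * h
        return h
```

The design trade-off: the stacked variant keeps the structural and temporal modules separate and interchangeable, while the integrated variant lets the recurrent state itself be diffused over the current snapshot's edges at every step.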


Summary

INTRODUCTION

The bulk of network science literature focuses on static networks, yet every network existing in the real world changes over time. In the discrete case, traditional GNNs can be applied directly to each snapshot; in the continuous case there is more variety, since node aggregation can no longer be done using traditional GNNs. Given this definition of representation learning, network models in which RNNs are used but the network structure is learned by methods other than node aggregation (temporal random walks, for example) are not considered DGNNs. The previous section (Section II) introduced a framework for dynamic networks and an overview of dynamic network models. In the stacked setting, a temporal model f consumes the GNN output, where f is a neural architecture for temporal modelling (in the methods surveyed, f is almost always an RNN but can be self-attention) and z_i^t ∈ R^l is the vector representation of node i at time t produced by the GNN, with l the output dimension of the GNN.
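
Since f can be self-attention rather than an RNN, the sketch below shows one way a causal self-attention module could play the role of f over the per-snapshot GNN embeddings z_i^t. It assumes the embeddings are stacked into a (nodes, T, l) tensor; the class name and masking scheme are illustrative assumptions, not the survey's notation:

```python
import torch
import torch.nn as nn

class TemporalSelfAttention(nn.Module):
    """A hypothetical f: causal self-attention over each node's
    sequence of GNN embeddings z_i^1, ..., z_i^T."""
    def __init__(self, l, heads=4):
        super().__init__()
        # l must be divisible by the number of heads
        self.attn = nn.MultiheadAttention(embed_dim=l, num_heads=heads, batch_first=True)

    def forward(self, z):
        # z: (n_nodes, T, l), where z[i, t] is node i's GNN embedding at time t
        T = z.size(1)
        # causal mask: position t may only attend to times <= t
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        h, _ = self.attn(z, z, z, attn_mask=mask)
        return h  # (n_nodes, T, l) temporally contextualised representations
```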

DEEP LEARNING FOR PREDICTION OF NETWORK TOPOLOGY
DECODERS
LOSS FUNCTIONS
Findings
CHALLENGES AND FUTURE WORK
