Abstract

Graph neural networks (GNNs) are a general framework for applying deep neural networks to graph data. The defining feature of a GNN is a form of neural message passing in which vector messages are exchanged between nodes and updated using neural networks. The message passing operation that underlies GNNs has recently been applied to develop neural approximate inference algorithms, but little work has been done to understand under what conditions GNNs can be used as a core module for building general inference models. To study this question, we consider the task of out-of-distribution generalization, where training and test data have different distributions, and systematically investigate how graph size and structural properties affect the inferential performance of GNNs. We find that (1) the average unique node degree is one of the key features in predicting whether GNNs can generalize to unseen graphs; (2) graph size is not a fundamental limiting factor of generalization in GNNs when the average node degree remains invariant across training and test distributions; (3) despite this size-invariant generalization, training GNNs on graphs of high degree (and consequently of large size) is not trivial; and (4) neural inference with GNNs outperforms algorithmic inference, especially when the pairwise potentials are strong, which naturally makes the inference problem harder.
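The neural message passing described above can be made concrete with a minimal sketch. The snippet below is an illustrative, assumption-laden example (single linear layers with ReLU standing in for the message and update networks, sum aggregation over a binary adjacency matrix); it is not the specific architecture used in the paper.

```python
import numpy as np

def message_passing_step(H, A, W_msg, W_upd):
    """One round of neural message passing (illustrative sketch).

    H: (n, d) node feature matrix
    A: (n, n) binary adjacency matrix
    W_msg, W_upd: weights standing in for the message and update
    networks (single linear layers + ReLU here for brevity).
    """
    # Each node computes a message from its current state.
    messages = np.maximum(H @ W_msg, 0.0)          # (n, d_msg)
    # Sum the messages arriving from each node's neighbours.
    aggregated = A @ messages                      # (n, d_msg)
    # Update each node's state from its state and the aggregate.
    H_new = np.maximum(
        np.concatenate([H, aggregated], axis=1) @ W_upd, 0.0
    )
    return H_new

# Tiny usage example on a 3-node path graph (hypothetical dimensions).
rng = np.random.default_rng(0)
n, d, d_msg = 3, 4, 4
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = rng.normal(size=(n, d))
W_msg = rng.normal(size=(d, d_msg))
W_upd = rng.normal(size=(d + d_msg, d))
H = message_passing_step(H, A, W_msg, W_upd)
print(H.shape)  # (3, 4)
```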
