Abstract

Convolutional neural networks (CNNs) restrict the otherwise arbitrary linear operation of neural networks to a convolution with a bank of learned filters. This makes them suitable for learning tasks on data that exhibit the regular structure of time signals and images. The use of convolutions, however, makes them unsuitable for processing data that lack such regular structure. Graph signal processing (GSP) has emerged as a powerful alternative for processing signals whose irregular structure can be described by a graph. Central to GSP is the notion of graph convolutional filters, which can be used to define convolutional graph neural networks (GNNs). In this paper, we show that the graph convolution can be interpreted as either a diffusion or an aggregation operation. When combined with nonlinear processing, these two interpretations lead to different generalizations, which we term selection and aggregation GNNs. The selection GNN relies on linear combinations of signal diffusions at different resolutions, combined with node-wise nonlinearities. The aggregation GNN relies on linear combinations of neighborhood averages of different depths; instead of node-wise nonlinearities, its nonlinearity is applied pointwise across the different aggregation levels. Both models particularize to regular CNNs when applied to time signals but differ when applied to arbitrary graphs. Numerical evaluations show different levels of performance for selection and aggregation GNNs.
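As a rough illustration of the diffusion interpretation described above, the following minimal sketch computes a graph convolution as a polynomial in a graph shift operator applied to a graph signal. The function name, variable names, and shapes are assumptions for illustration, not the paper's implementation; the ring-graph example only shows how the operation particularizes to an ordinary circular convolution on a time signal.

```python
import numpy as np

def graph_convolution(S, x, h):
    """Apply a graph convolutional filter to a graph signal (illustrative sketch).

    S : (N, N) graph shift operator (e.g., adjacency or Laplacian matrix)
    x : (N,) graph signal, one value per node
    h : (K,) filter taps; the filter output is sum_k h[k] * S^k x

    Each multiplication by S diffuses the signal one hop further, so the
    output is a linear combination of diffused versions of x.
    """
    y = np.zeros_like(x, dtype=float)
    diffused = x.astype(float)      # S^0 x
    for tap in h:
        y += tap * diffused         # accumulate h_k * S^k x
        diffused = S @ diffused     # advance to S^(k+1) x
    return y

# Example: on a directed ring graph the shift operator is a cyclic shift,
# so the graph convolution reduces to a circular convolution in time.
N = 6
S = np.roll(np.eye(N), 1, axis=1)   # adjacency of a directed ring
x = np.random.randn(N)
h = np.array([1.0, 0.5, 0.25])      # three filter taps
print(graph_convolution(S, x, h))
```

In this reading, a selection GNN layer would follow such a filter with a node-wise nonlinearity, while an aggregation GNN instead collects the successive diffusions at a node into a sequence and applies the nonlinearity across those aggregation levels.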
