Abstract

Graph neural networks (GNNs) have become a popular choice for analyzing graph data in recent years, and characterizing their expressiveness is an active area of research. One popular measure of expressiveness is the number of linear regions of a neural network with piecewise linear activations. In this paper, we estimate the number of linear regions of classic graph convolutional networks (GCNs) with ReLU activation in both the one-layer and multi-layer settings. We derive an optimal upper bound on the maximum number of linear regions for one-layer GCNs, and upper and lower bounds for multi-layer GCNs. Our simulation results suggest that the true maximum number of linear regions is likely closer to our lower bound. These findings indicate that, per parameter, multi-layer GCNs have exponentially greater expressivity than one-layer GCNs, implying that deeper GCNs are more expressive than their shallow counterparts.
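To make the "linear regions" notion concrete, the following minimal Python sketch (not from the paper; all variable names and the toy network are illustrative assumptions) counts distinct ReLU activation patterns of a random one-layer network over a sampled grid. Each activation pattern corresponds to a region on which the network is affine, so the grid count is an empirical lower estimate, while the classical hyperplane-arrangement bound of Zaslavsky, sum over i from 0 to d of C(m, i) for m neurons on inputs in R^d, gives the matching upper bound for a single layer.

```python
# A minimal sketch, assuming a generic one-layer ReLU network rather than
# the paper's GCN construction. Each distinct on/off pattern of the ReLU
# units is one linear region; sampling a finite grid under-counts.
from math import comb

import numpy as np

rng = np.random.default_rng(0)
d, m = 2, 5                      # input dimension, hidden neurons
W = rng.standard_normal((m, d))  # weights: one hyperplane per neuron
b = rng.standard_normal(m)       # biases

# Sample a dense grid and record each point's ReLU activation pattern.
xs = np.linspace(-3.0, 3.0, 400)
grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, d)
patterns = {tuple(row) for row in (grid @ W.T + b > 0)}

# Zaslavsky's bound: m hyperplanes split R^d into at most
# sum_{i=0}^{d} C(m, i) regions.
upper_bound = sum(comb(m, i) for i in range(d + 1))
print(f"activation patterns found on grid:   {len(patterns)}")
print(f"hyperplane-arrangement upper bound:  {upper_bound}")
```

With d = 2 and m = 5 the bound evaluates to 1 + 5 + 10 = 16; the grid count approaches it as the sampling window and resolution grow. The paper's contribution concerns the analogous counts for GCNs, where graph convolutions constrain the arrangement.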
