Abstract

Deep learning techniques such as convolutional neural networks (CNNs) have significantly impacted fields like computer vision and other Euclidean data domains. However, many domains involve non-Euclidean data, and it is of interest to extend CNNs to leverage the underlying data graph. This has driven a surge of interest in geometric deep learning, which adapts CNNs to graph signals, and researchers have developed several graph neural network models for graph classification. There is no clear winner among these models, as their performance depends on the topology of the data graph. In this paper, we explore the tradeoffs between graph topology and the architecture of graph neural networks for graph classification on real and synthetic datasets. In particular, we examine 1) network metrics of the graph structures being classified and 2) neural network hyperparameters such as the degree of the polynomial filter and the number of convolutional layers. Our experimental results show that there is a tradeoff between the performance of graph CNNs (GCNNs) and the graph topology. Simple classifiers based on network metrics and signal statistics may outperform GCNNs on real datasets, and on synthetic datasets when only the graph structure is important.
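To make the "degree of polynomial filter" hyperparameter concrete, the sketch below shows a degree-K polynomial graph filter of the form y = Σ_k θ_k L^k x applied to a signal on a small path graph; this is an illustrative example of the filter family discussed in the abstract, not the paper's exact implementation, and the graph, coefficients θ, and function names are assumptions.

```python
import numpy as np

def polynomial_graph_filter(L, x, theta):
    """Apply the polynomial filter y = sum_k theta[k] * L^k @ x.

    L     : (n, n) graph Laplacian (or other graph shift operator)
    x     : (n,) graph signal, one value per node
    theta : list of filter coefficients; len(theta) - 1 is the filter degree
    """
    y = np.zeros_like(x, dtype=float)
    Lk = np.eye(L.shape[0])  # L^0 = identity
    for t in theta:
        y += t * (Lk @ x)
        Lk = Lk @ L  # advance to the next power of L
    return y

# Path graph on 3 nodes: Laplacian L = D - A
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

x = np.array([1.0, 0.0, 0.0])       # impulse at node 0
y = polynomial_graph_filter(L, x, theta=[0.0, 1.0])  # degree-1 filter: y = L @ x
```

Increasing the filter degree lets each node aggregate information from farther neighborhoods (a degree-K filter reaches K-hop neighbors), which is one axis of the architecture/topology tradeoff studied in the paper.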
