Abstract

Graph convolutional networks (GCNs) have shown success in many graph-based applications because they combine node features with graph topology to obtain expressive embeddings. While numerous GCN variants exist, a typical graph convolution layer uses neighborhood aggregation to extract topological features and fully-connected (FC) layers to extract node-wise features. However, as the receptive field of a GCN grows, the tight coupling between the numbers of neighborhood aggregation operations and FC layers increases the risk of over-fitting. Moreover, an FC layer placed between two successive aggregation operations mixes and pollutes features across channels, introducing noise and making node features hard to converge in each channel. In this article, we explore graph convolution without FC layers. We propose scale graph convolution, a new graph convolution that uses a channel-wise scale transformation to extract node features. We provide empirical evidence that the new method has lower over-fitting risk and needs fewer layers to converge. We show, from both theoretical and empirical perspectives, that models with scale graph convolution have lower computational and memory costs than traditional GCN models. Experimental results on various datasets show that our method achieves state-of-the-art results in a cost-effective fashion.
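The contrast drawn above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it assumes the standard GCN form (normalized aggregation followed by an FC weight matrix) and replaces the FC transform with a per-channel scale vector, so that channels are never mixed; the function names, the toy graph, and the scale values are all invented for illustration.

```python
import numpy as np

def gcn_layer(A_hat, H, W):
    # Standard graph convolution: neighborhood aggregation (A_hat @ H)
    # followed by a fully-connected transform (@ W) that mixes channels.
    return np.maximum(A_hat @ H @ W, 0.0)

def scale_gcn_layer(A_hat, H, s):
    # Channel-wise scale transformation in place of the FC layer
    # (a sketch of the idea in the abstract): aggregation followed by
    # an element-wise scale per channel, so channels stay separate.
    # s has shape (d,), one learnable scale per feature channel.
    return np.maximum((A_hat @ H) * s, 0.0)

# Toy 3-node graph with self-loops and symmetric normalization.
A = np.array([[1., 1., 0.],
              [1., 1., 1.],
              [0., 1., 1.]])
deg = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(deg, deg))   # D^{-1/2} (A + I) D^{-1/2}

H = np.array([[1., 2.],                   # node features, d = 2 channels
              [3., 4.],
              [5., 6.]])
s = np.array([0.5, 2.0])                  # hypothetical per-channel scales

out = scale_gcn_layer(A_hat, H, s)
```

Note the parameter-count difference the abstract alludes to: the FC layer carries a d×d weight matrix per layer, while the scale layer carries only d scalars, which is one intuition for the lower memory cost and reduced over-fitting risk.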
