Abstract
Existing popular methods for semi-supervised node classification with high-order convolution improve the learning ability of graph convolutional networks (GCNs) by capturing feature information from high-order neighborhoods. However, such high-order convolutions usually require many parameters and incur high computational complexity. To address these limitations, we propose HCNP, a new higher-order GCN for semi-supervised node classification that simultaneously aggregates information from neighborhoods of several orders through a high-order convolution. In HCNP, we reduce the number of parameters with a weight-sharing mechanism and combine neighborhood information via multi-scale neighborhood pooling. Moreover, HCNP does not require a large number of hidden units, so it has few parameters and low computational complexity; we show that HCNP matches GCNs in complexity and parameter count. Comprehensive evaluations on publication citation datasets (Citeseer, Pubmed, and Cora) demonstrate that the proposed methods outperform MixHop in most cases while using fewer parameters at lower complexity, and achieve state-of-the-art performance in accuracy and parameter efficiency compared to other baselines.
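The sketch below illustrates, under stated assumptions, the two ideas the abstract names: a single weight matrix shared across neighborhood orders and multi-scale pooling of the per-order outputs. The class name, dense normalized adjacency, and element-wise max pooling are illustrative assumptions, not the authors' exact formulation.

import torch
import torch.nn as nn

class HighOrderConvSketch(nn.Module):
    # Minimal sketch of a high-order graph convolution layer: one weight
    # matrix is shared across all neighborhood orders, and the per-order
    # outputs are merged by element-wise max pooling (an assumption; the
    # paper's exact pooling operator may differ).
    def __init__(self, in_dim, out_dim, max_order=2):
        super().__init__()
        self.max_order = max_order
        # Weight sharing: a single linear map reused for every power of the
        # adjacency matrix, instead of one parameter matrix per order.
        self.shared_linear = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, adj_norm):
        # x: node features [N, in_dim]; adj_norm: normalized adjacency with
        # self-loops, dense [N, N] (a simplifying assumption for the sketch).
        h = self.shared_linear(x)      # transform once, reuse for all orders
        outputs, prop = [], h
        for _ in range(self.max_order):
            prop = adj_norm @ prop     # propagate to the next-hop neighborhood
            outputs.append(prop)
        # Multi-scale neighborhood pooling over the collected orders.
        return torch.stack(outputs, dim=0).max(dim=0).values

Because the same linear layer is reused for every order, the parameter count stays at that of a single GCN layer, which is the property the abstract emphasizes when comparing HCNP with GCNs.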
Highlights
Recently, the use and popularity of graph convolutional networks (GCNs) have been increasing
To overcome the above two limitations, we propose a new higher-order GCN with multi-scale neighborhood pooling (HCNP) for semi-supervised node classification
Graph Laplacian regularization models [39]–[41] and graph embedding approaches [42]–[44] have been applied to semi-supervised node classification; we mainly review GCN models [5], [6], [45], [46] and high-order GCNs [23], [37], [38], [47] for this task
Summary
Recently, the use and popularity of graph convolutional networks (GCNs) have been increasing. A standard GCN stacks two graph convolutional layers to capture feature information of the nodes that are two hops away. To improve the expressive power, DeepGCNs [19] construct a 56-layer graph convolution model using three traditional deep learning techniques: residual connections [20], dense connections [21], and dilated convolutions [22]. Designing such deep models makes them complex and introduces many parameters, which makes training on large graphs extremely difficult. To overcome these two limitations, we propose a new higher-order GCN with multi-scale neighborhood pooling (HCNP) for semi-supervised node classification. Comprehensive evaluations on publication citation datasets (Citeseer, Pubmed, Cora) demonstrate that HCNP outperforms benchmark methods in terms of accuracy, complexity, and parameters on semi-supervised classification tasks