Message-passing Graph Neural Networks (GNNs) are remarkably successful in graph representation learning. Most of them propagate and aggregate information within a fixed-size receptive field (RF) shared by all nodes. In this paper, our empirical studies show that different graphs and target nodes often call for RFs of different sizes. To this end, we propose a dynamic selection mechanism that allows each node to adaptively adjust the size of its RF according to the properties of the graph. In particular, we design an architectural unit called the Selective-Hop (SH) block, in which information from neighborhoods at different hops is merged using dual attention mechanisms: (i) node-wise attention on each network branch to capture mutual information between the subgraphs and the entire graph; and (ii) channel-wise attention across network branches to detect the significance of different hops at node-level granularity. As such, the SH block selects neighbors flexibly, without being constrained to a fixed number of hops. By stacking several SH blocks in the style of standard GNN layers, we instantiate two new GNN variants, SHGCN and SHSGC. Empirical results on eight benchmark datasets for semi-supervised node classification reveal that our new models achieve a considerable improvement over state-of-the-art methods under low homophily or heterophily and maintain competitive performance under homophily.
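To make the dual-attention design concrete, below is a minimal PyTorch sketch of how an SH block could fuse hop-wise representations. It is an illustration based only on the abstract, not the authors' implementation: the `SHBlock` class, its parameters, and all tensor shapes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SHBlock(nn.Module):
    """Hypothetical Selective-Hop block: fuses K hop-wise node features with
    (i) node-wise attention on each branch and (ii) channel-wise attention
    over the branches. A sketch inferred from the abstract only."""

    def __init__(self, dim: int, num_hops: int):
        super().__init__()
        # (i) node-wise attention: one gate per hop branch
        self.node_gates = nn.ModuleList(
            [nn.Linear(dim, 1) for _ in range(num_hops)]
        )
        # (ii) channel-wise attention: scores the hop branches per node
        self.hop_scorer = nn.Linear(dim, num_hops)

    def forward(self, hop_feats: list[torch.Tensor]) -> torch.Tensor:
        # hop_feats[k]: [N, dim] features aggregated from the k-hop neighborhood
        # (i) gate each branch node-wise
        gated = [torch.sigmoid(g(h)) * h for g, h in zip(self.node_gates, hop_feats)]
        stacked = torch.stack(gated, dim=1)                         # [N, K, dim]
        # (ii) score the K branches per node and softmax over hops,
        # so each node picks its own effective receptive-field size
        summary = stacked.mean(dim=1)                               # [N, dim]
        hop_weights = F.softmax(self.hop_scorer(summary), dim=-1)   # [N, K]
        # weighted sum over hop branches
        return (hop_weights.unsqueeze(-1) * stacked).sum(dim=1)    # [N, dim]
```

In this reading, `hop_feats` would come from repeated propagation (e.g., successive applications of a normalized adjacency matrix to the node features), and stacking several such blocks would yield a model in the spirit of SHGCN or SHSGC.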