Graph neural networks (GNNs) have gained significant attention for their ability to learn representations from graph-structured data, in which message passing and feature fusion strategies play an essential role. However, traditional Graph Neural Architecture Search (GNAS) mainly optimizes over a static receptive field to ease the search process. To efficiently exploit latent relationships between non-adjacent nodes as well as edge features, this work proposes a novel two-stage approach that optimizes GNN structures more effectively by adaptively aggregating neighborhood features at multiple scales. This adaptive multi-scale GNAS assigns optimal weights to different neighbors across graphs and learning tasks. In addition, it incorporates latent relationships and edge features into message passing and supports different feature fusion strategies. Compared with traditional approaches, ours efficiently explores a much larger and more diverse search space. We also prove that traditional multi-hop GNNs are low-pass filters, which can remove important high-frequency components of signals from remote neighbors in a graph, and that they are not even expressive enough to distinguish some simple regular graphs, justifying the superiority of our approach. Experiments on seven datasets across three graph learning tasks (graph regression, node classification, and graph classification) demonstrate that our method yields significant improvements over state-of-the-art GNAS approaches and human-designed GNNs. For example, within our framework, a 12-layer AM-GNAS model achieves an MAE of 0.102 on the ZINC dataset, an improvement of over 25%.
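As a minimal sketch of the adaptive multi-scale aggregation idea (not the paper's implementation; the hop count, softmax weighting, and GCN-style normalization below are assumptions), the following combines node features from several hop distances with learnable per-hop weights:

import numpy as np

def normalized_adjacency(adj):
    # Symmetrically normalized adjacency with self-loops: D^-1/2 (A + I) D^-1/2
    a = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a.sum(axis=1))
    return a * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def multi_scale_aggregate(adj, features, hop_logits):
    # Hypothetical adaptive scheme: a softmax over hop_logits yields per-hop
    # weights, so near and remote neighborhoods can be emphasized differently
    # depending on the graph and task.
    a_hat = normalized_adjacency(adj)
    weights = np.exp(hop_logits) / np.exp(hop_logits).sum()
    out = np.zeros_like(features)
    h = features
    for w in weights:
        out = out + w * h   # weighted contribution of the current hop
        h = a_hat @ h       # propagate features one hop further
    return out

# Toy usage: 4-node path graph, 2-dim features, weights over hops 0..2.
adj = np.array([[0., 1., 0., 0.],
                [1., 0., 1., 0.],
                [0., 1., 0., 1.],
                [0., 0., 1., 0.]])
x = np.random.rand(4, 2)
print(multi_scale_aggregate(adj, x, hop_logits=np.array([0.5, 0.2, -0.1])))

Because the per-hop weights are learned, training can shift emphasis toward remote neighbors when a task benefits from them, which is the behavior the adaptive scheme aims for.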
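The low-pass claim follows a standard spectral argument, sketched here under the usual GCN normalization (which may differ from the paper's exact setting):

\[
\hat{A} = \tilde{D}^{-1/2}\tilde{A}\,\tilde{D}^{-1/2} = I - \tilde{L}, \qquad \tilde{L} = U \Lambda U^{\top}, \quad \lambda_i \in [0, 2),
\]
\[
\hat{A}^{K} x = U (I - \Lambda)^{K} U^{\top} x,
\]

so each graph-frequency component of the signal $x$ is scaled by $(1-\lambda_i)^{K}$; since $|1-\lambda_i| < 1$ for every nonzero frequency, stacking hops progressively suppresses all but the lowest-frequency components, i.e. the multi-hop operator acts as a low-pass filter.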