Abstract

The high complexity of graph neural networks (GNNs) on large-scale networks hinders their industrial application. Graph condensation (GCond) was recently proposed to address this problem by condensing the original large-scale graph into a small-scale one, with the goal that GNNs trained on the condensed graph perform similarly to those trained on the original graph. GCond achieves satisfactory performance on some datasets. However, GCond models the condensed graph as a single fully connected graph, which limits the diversity of the embeddings it can produce, especially when there are few synthetic nodes. We propose Multiple Sparse Graphs Condensation (MSGC), which condenses the original large-scale graph into multiple small-scale sparse graphs. MSGC takes standard neighborhood patterns as its essential substructures and can construct various connection schemes; correspondingly, GNNs obtain numerous sets of embeddings, which significantly enriches embedding diversity. Experiments show that, compared with GCond and other baselines, MSGC has significant advantages at the same condensed graph scale. MSGC retains nearly 100% of the original performance on the Flickr and Citeseer datasets while reducing their graph scale by over 99.0%.
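To make the core idea concrete, below is a minimal sketch (not the authors' code) of how a single set of synthetic node features can be paired with several small sparse adjacency matrices, with a shared GNN layer producing one embedding set per graph. All names here (SimpleGCNLayer, msgc_embeddings, the ring and self-loop-only connection schemes) are hypothetical illustrations standing in for the "standard neighborhood patterns" mentioned in the abstract, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class SimpleGCNLayer(nn.Module):
    """One graph-convolution step: H' = relu(A_hat @ H @ W)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(in_dim, out_dim) * 0.1)

    def forward(self, adj_norm: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        return torch.relu(adj_norm @ feats @ self.weight)


def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2}."""
    adj = adj + torch.eye(adj.size(0))
    deg = adj.sum(dim=1)
    d_inv_sqrt = deg.pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)


def msgc_embeddings(feats: torch.Tensor, adjs: list, layer: SimpleGCNLayer) -> list:
    """Run the shared GNN layer over each sparse condensed graph,
    yielding one embedding set per connection scheme."""
    return [layer(normalize_adj(a), feats) for a in adjs]


if __name__ == "__main__":
    n_syn, in_dim, hid_dim = 8, 16, 32      # tiny condensed graph (hypothetical sizes)
    feats = torch.randn(n_syn, in_dim)      # shared synthetic node features

    # Two toy connection schemes over the same synthetic nodes:
    ring = torch.roll(torch.eye(n_syn), shifts=1, dims=1)
    ring = ring + ring.t()                  # undirected ring pattern
    isolated = torch.zeros(n_syn, n_syn)    # self-loops only (added in normalize_adj)

    layer = SimpleGCNLayer(in_dim, hid_dim)
    emb_sets = msgc_embeddings(feats, [ring, isolated], layer)
    print([e.shape for e in emb_sets])      # multiple embedding sets, one per graph
```

The point of the sketch is only that multiple sparse connection schemes over the same small node set yield multiple distinct embedding sets from one GNN, which is the diversity argument the abstract makes against a single fully connected condensed graph.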
