Abstract

Distribution shift is pervasive in graph representation learning and often degrades model performance. This work investigates how to improve the performance of a graph neural network (GNN) on a single graph by controlling distribution shift between embedding spaces. Specifically, we derive an upper error bound that quantitatively characterizes how distribution shift affects GNN performance on a single graph. Since a single graph offers no natural domain division, we propose PW-GNN, which simultaneously learns discriminative embeddings and reduces distribution shift. PW-GNN measures distribution discrepancy via the distance between test embeddings and prototypes, recasting the minimization of distribution shift as minimization of the power of the Wasserstein distance, which is introduced into the GNN as a regularizer. A series of theoretical analyses demonstrates the effectiveness of PW-GNN. In addition, a low-complexity training algorithm is designed by combining an entropy-regularized strategy with a block coordinate descent method. Extensive numerical experiments are conducted on different datasets with both biased and unbiased splits, and the model is evaluated with four backbone architectures. Results show that PW-GNN outperforms state-of-the-art baselines and mitigates up to 8% of the negative effect of distribution shift on the backbones.
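
To make the regularizer concrete, the sketch below shows one common way to compute an entropy-regularized approximation to the power of the Wasserstein distance between node embeddings and class prototypes via log-domain Sinkhorn iterations. It is only an illustration under assumed conventions, not the paper's exact algorithm: the use of PyTorch, uniform marginals, and the tensor names (`emb`, `prototypes`, `train_mask`, `test_mask`) are all assumptions.

```python
import torch


def sinkhorn_wasserstein(x, y, eps=0.1, n_iters=50, p=2):
    """Entropy-regularized approximation of the p-th power of the Wasserstein
    distance between two point clouds with uniform weights.

    x: (n, d) test embeddings; y: (m, d) prototypes. Returns a differentiable
    scalar that can be added to the training loss as a regularizer.
    """
    cost = torch.cdist(x, y) ** p                      # (n, m) pairwise transport costs
    n, m = cost.shape
    log_mu = torch.log(torch.full((n,), 1.0 / n, device=x.device))
    log_nu = torch.log(torch.full((m,), 1.0 / m, device=x.device))
    u = torch.zeros(n, device=x.device)                # dual potentials
    v = torch.zeros(m, device=x.device)

    def M(u, v):
        # Log of the Gibbs kernel rescaled by the dual potentials.
        return (-cost + u[:, None] + v[None, :]) / eps

    for _ in range(n_iters):                           # log-domain Sinkhorn updates
        u = eps * (log_mu - torch.logsumexp(M(u, v), dim=1)) + u
        v = eps * (log_nu - torch.logsumexp(M(u, v).transpose(0, 1), dim=1)) + v

    pi = torch.exp(M(u, v))                            # approximate transport plan
    return torch.sum(pi * cost)                        # <pi, C>, the OT objective


# Hypothetical training objective: supervised loss on labeled nodes plus the
# Wasserstein term between test-node embeddings and prototypes, weighted by lam.
# loss = torch.nn.functional.cross_entropy(logits[train_mask], labels[train_mask]) \
#        + lam * sinkhorn_wasserstein(emb[test_mask], prototypes)
```

The entropic term (controlled by `eps`) is what keeps the inner optimization cheap, which is in the spirit of the low-complexity training strategy mentioned in the abstract; smaller `eps` approximates the unregularized distance more closely at the cost of more iterations.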

