The aim of network embedding is to learn compact node representations, which have been shown to be effective in various downstream learning tasks such as link prediction and node classification. Most methods focus on preserving different network structures and properties, ignoring the fact that real-world networks are usually noisy and incomplete; such methods therefore tend to lack robustness and suffer from overfitting. Recently, methods based on generative adversarial networks (GANs) have been exploited to impose a prior distribution on node embeddings and encourage global smoothness, but their architectures are complicated and they suffer from non-convergence. Here, we propose adversarial training (AdvT), a more succinct and effective local regularization method for negative-sampling-based network embedding, to improve model robustness and generalization ability. Specifically, we first define adversarial perturbations in the embedding space rather than in the discrete graph domain, circumventing the challenge of generating discrete adversarial examples. Then, to enable more effective regularization, we design adaptive L2-norm constraints on the adversarial perturbations that depend on the connectivity pattern of each node pair. We integrate AdvT into several well-known models, including DeepWalk, LINE, and node2vec, and conduct extensive experiments on benchmark datasets to verify its effectiveness.
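To make the described regularization concrete, the following is a minimal sketch of one AdvT-style update: a fast-gradient perturbation of the node embeddings under a negative-sampling loss, with a per-node L2 budget standing in for the paper's connectivity-dependent constraint on node pairs. All names (`neg_sampling_loss`, `adversarial_step`, `base_eps`, the degree-based scaling) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def neg_sampling_loss(emb, ctx, pos_pairs, neg_pairs):
    """Skip-gram negative-sampling loss over (node, context) index pairs."""
    pos_score = (emb[pos_pairs[:, 0]] * ctx[pos_pairs[:, 1]]).sum(dim=1)
    neg_score = (emb[neg_pairs[:, 0]] * ctx[neg_pairs[:, 1]]).sum(dim=1)
    return -(F.logsigmoid(pos_score).mean() + F.logsigmoid(-neg_score).mean())

def adversarial_step(emb, ctx, pos_pairs, neg_pairs, base_eps, degrees):
    """Clean loss plus loss under a fast-gradient perturbation of the
    embeddings, with an adaptive per-node L2 budget (assumed heuristic:
    shrink the budget for high-degree nodes)."""
    clean = neg_sampling_loss(emb, ctx, pos_pairs, neg_pairs)

    # Gradient of the clean loss w.r.t. the embedding matrix.
    grad = torch.autograd.grad(clean, emb, retain_graph=True)[0]

    # Adaptive L2 budget per node, derived from connectivity.
    eps = base_eps / degrees.clamp(min=1.0).sqrt()

    # Fast-gradient perturbation, L2-normalized per row and detached so
    # it is treated as a constant during backpropagation.
    pert = eps.unsqueeze(1) * F.normalize(grad.detach(), dim=1)

    adv = neg_sampling_loss(emb + pert, ctx, pos_pairs, neg_pairs)
    return clean + adv  # regularized training objective
```

In use, `emb` and `ctx` would be embedding matrices created with `requires_grad=True` (e.g., `torch.randn(n, d, requires_grad=True)`), and the returned objective would be minimized with a standard optimizer; the same wrapper could in principle be placed around the sampled pairs produced by DeepWalk, LINE, or node2vec.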