Abstract

Network embedding aims to learn compact node representations, which have proven effective in various downstream tasks such as link prediction and node classification. Most methods focus on preserving different network structures and properties, ignoring the fact that networks are usually noisy and incomplete; as a result, such methods can lack robustness and suffer from overfitting. Recently, methods based on generative adversarial networks (GANs) have been exploited to impose a prior distribution on node embeddings to encourage global smoothness, but their architectures are complicated and they suffer from non-convergence. Here, we propose adversarial training (AdvT), a more succinct and effective local regularization method for negative-sampling-based network embedding, to improve model robustness and generalization ability. Specifically, we first define adversarial perturbations in the embedding space rather than in the discrete graph domain, circumventing the challenge of generating discrete adversarial examples. Then, to enable more effective regularization, we design adaptive L2-norm constraints on the adversarial perturbations that depend on the connectivity pattern of each node pair. We integrate AdvT into several widely used models, including DeepWalk, LINE, and node2vec, and conduct extensive experiments on benchmark datasets to verify its effectiveness.
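
To make the idea concrete, below is a minimal PyTorch sketch of fast-gradient-method-style adversarial regularization for a skip-gram negative-sampling objective, perturbing embeddings rather than the discrete graph. The names (sgns_loss, adversarial_step, eps) are illustrative, and the per-pair eps budget stands in for the paper's connectivity-dependent adaptive constraint; it is an assumption, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def sgns_loss(center, context, negatives):
    """Skip-gram negative-sampling loss.

    center, context: (B, d) embeddings; negatives: (B, K, d).
    """
    pos = F.logsigmoid((center * context).sum(-1))                           # (B,)
    neg = F.logsigmoid(-(center.unsqueeze(1) * negatives).sum(-1)).sum(-1)   # (B,)
    return -(pos + neg).mean()

def adversarial_step(embed, centers, contexts, negs, eps):
    """Clean loss plus loss under a worst-case L2 perturbation of the
    center embeddings. `embed` is an nn.Embedding; `eps` is a (B,) tensor
    of per-pair budgets. Making eps depend on, e.g., node degree or edge
    weight is one way to realize an adaptive constraint (an assumption,
    not the paper's exact rule).
    """
    c, ctx, n = embed(centers), embed(contexts), embed(negs)
    clean = sgns_loss(c, ctx, n)

    # Perturbations live in the continuous embedding space, so no discrete
    # adversarial graph edits are needed.
    grad = torch.autograd.grad(clean, c, retain_graph=True)[0]
    delta = eps.unsqueeze(-1) * grad / (grad.norm(dim=-1, keepdim=True) + 1e-12)

    # Adversarial loss on the perturbed embeddings; gradients do not flow
    # through the perturbation itself (first-order, FGM-style approximation).
    adv = sgns_loss(c + delta.detach(), ctx, n)
    return clean + adv
```

In training, the returned loss would be backpropagated through embed as usual, so the only overhead relative to standard negative sampling is one extra gradient computation per batch.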
