Abstract

Recently, unsupervised graph representation learning has attracted considerable attention for its ability to effectively encode graph-structured data without semantic annotations. To accelerate training, noise contrastive estimation (NCE) samples negative examples uniformly to fit an unnormalized graph model. However, this uniform sampling strategy can easily lead to slow convergence, or even the vanishing gradient problem. In this paper, we theoretically show that sampling hard negatives close to the current anchor alleviates these difficulties. Building on this finding, we propose an Adaptive Negative Sampling strategy, namely AdaNS, which efficiently samples hard negatives from a mixing distribution over the dimensional elements of the current node representation. Experiments show that applying AdaNS on top of representative unsupervised models, e.g., DeepWalk and GraphSAGE, outperforms existing negative sampling strategies on node classification and visualization tasks. This further demonstrates that sampling hard negatives improves the learned node representations.
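
The abstract does not spell out AdaNS's mixing distribution, so the sketch below is only a rough illustration of the general idea of hard-negative sampling it builds on: drawing negatives with probability increasing in their similarity to the current anchor. The function name `sample_hard_negatives`, the dot-product similarity, and the softmax weighting are assumptions for illustration, not the paper's method.

```python
import numpy as np

def sample_hard_negatives(anchor, embeddings, num_samples, temperature=1.0, rng=None):
    """Sample negative nodes with probability increasing in their similarity
    to the anchor, so 'hard' negatives close to the anchor are favoured.

    anchor      -- (d,) embedding of the current anchor node
    embeddings  -- (N, d) embeddings of all candidate negative nodes
    num_samples -- number of negatives to draw
    """
    rng = rng or np.random.default_rng()
    # Similarity of every candidate to the anchor; temperature controls
    # how sharply the distribution concentrates on the hardest negatives.
    scores = embeddings @ anchor / temperature
    scores -= scores.max()          # subtract max for numerical stability
    probs = np.exp(scores)
    probs /= probs.sum()
    return rng.choice(len(embeddings), size=num_samples, replace=False, p=probs)
```

In an NCE-style training loop, such a sampler would replace the uniform draw: at each step the negatives for the current anchor are drawn from this similarity-weighted distribution before the contrastive loss and gradients are computed.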
