Many large knowledge graphs are now available and provide semantically structured information that is regarded as an important resource for question answering and decision support tasks. However, they are built on rigid symbolic frameworks, which makes them difficult to use in other intelligent systems. Knowledge graph embedding approaches, which embed symbolic entities and relations into continuous vector spaces, are therefore gaining increasing attention. Such graph embeddings are often learned by training a model to distinguish true triples from negative ones. Unfortunately, negative triples created by replacing the head or tail of a true triple with a randomly selected entity are easily identified by the model and thus provide insufficient signal for training useful models. To this end, we propose a method under a generative adversarial architecture to learn graph embeddings, in which a generative network is trained to provide continually improved "plausible" triples, whereas a discriminative network learns to distinguish true triples from the others by competing with the generator in a two-player minimax game. At convergence, the generative network recovers the training data and can be used for knowledge graph completion, while the discriminative network is trained to be a good triple classifier. Extensive experiments demonstrate that our method improves multiple graph embedding models by a significant margin on both link prediction and triple classification tasks.
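To make the adversarial training scheme described above concrete, the following is a minimal sketch (not the authors' code) of how a generator can propose hard negative triples while a discriminator learns to score true triples above them. The TransE-style scorer, the REINFORCE-style generator update, and all names, dimensions, and hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch of adversarial negative sampling for KG embedding.
# Assumptions: TransE-style scorers for both networks, a margin loss for the
# discriminator, and a policy-gradient (REINFORCE) update for the generator.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_ENT, NUM_REL, DIM, N_CAND, MARGIN = 1000, 50, 100, 64, 1.0

class TransEScorer(nn.Module):
    """Simple TransE-style scorer: higher output = more plausible triple."""
    def __init__(self):
        super().__init__()
        self.ent = nn.Embedding(NUM_ENT, DIM)
        self.rel = nn.Embedding(NUM_REL, DIM)

    def forward(self, h, r, t):
        # Negative L2 distance so that larger values mean more plausible.
        return -torch.norm(self.ent(h) + self.rel(r) - self.ent(t), dim=-1)

generator, discriminator = TransEScorer(), TransEScorer()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

def adversarial_step(h, r, t):
    """One training step on a batch of true triples (h, r, t)."""
    batch = h.size(0)
    # Draw a pool of candidate corrupted tails for each true triple.
    cand_t = torch.randint(0, NUM_ENT, (batch, N_CAND))
    h_exp = h.unsqueeze(1).expand_as(cand_t)
    r_exp = r.unsqueeze(1).expand_as(cand_t)

    # Generator samples the most "plausible" negative from each candidate pool.
    g_scores = generator(h_exp, r_exp, cand_t)                 # (batch, N_CAND)
    probs = F.softmax(g_scores, dim=-1)
    idx = torch.multinomial(probs, 1).squeeze(-1)
    neg_t = cand_t.gather(1, idx.unsqueeze(1)).squeeze(1)

    # Discriminator: margin loss pushing true triples above generated negatives.
    pos = discriminator(h, r, t)
    neg = discriminator(h, r, neg_t)
    d_loss = F.relu(MARGIN - pos + neg).mean()
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: REINFORCE update, rewarded when its negatives fool the
    # discriminator (sampling indices are non-differentiable, hence the reward).
    reward = discriminator(h, r, neg_t).detach()
    log_prob = torch.log(probs.gather(1, idx.unsqueeze(1)).squeeze(1) + 1e-9)
    g_loss = -(reward * log_prob).mean()
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```

In this sketch the two networks play the minimax game described in the abstract: the generator is pushed toward negatives the discriminator finds hard to reject, while the discriminator is pushed to separate true triples from those increasingly plausible corruptions.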