Abstract

Knowledge graph completion aims to infer missing entities from the entities and relations already present in a knowledge graph. It underpins many applications, such as question answering systems and search engines. Because the completion process can be modeled as a Markov process, existing works solve this problem with reinforcement learning. However, three issues prevent them from achieving high accuracy: reward sparsity, the absence of domain-specific rules, and neglect of how knowledge graphs are generated. In this paper, we design a generative adversarial net (GAN)-based reinforcement learning model, named GRL, for knowledge graph completion. First, GRL employs a graph convolutional network to embed the knowledge graph into a low-dimensional space. Second, GRL employs both a GAN and a long short-term memory (LSTM) network to record the trajectory sequences the agent obtains by traversing the knowledge graph and to generate new trajectory sequences when needed; at the same time, GRL applies domain-specific rules accordingly. Finally, GRL employs the deep deterministic policy gradient method to optimize both rewards and adversarial loss. The experiments show that GRL both generates better policies and outperforms traditional methods on several tasks.
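The final step above relies on the deterministic policy gradient at the heart of DDPG: the actor is updated along the critic's gradient with respect to the action, chained through the actor's parameters. The following toy sketch (an illustrative assumption, not the paper's implementation) uses a one-dimensional linear actor and an analytically known critic to show that update rule in isolation; GRL would additionally fold the GAN's adversarial loss into the objective.

```python
import numpy as np

# Hypothetical, minimal deterministic policy gradient step (the rule DDPG uses).
# Actor: mu_theta(s) = theta * s.
# Stand-in "critic": Q(s, a) = -(a - 2*s)**2, so the optimal action is a* = 2s
# and theta should converge toward 2.0.
rng = np.random.default_rng(0)
theta = 0.0
lr = 0.05
for _ in range(200):
    s = rng.uniform(-1.0, 1.0)
    a = theta * s                      # deterministic action from the actor
    dQ_da = -2.0 * (a - 2.0 * s)       # critic's gradient w.r.t. the action
    dmu_dtheta = s                     # actor output's gradient w.r.t. theta
    theta += lr * dQ_da * dmu_dtheta   # chain rule: ascend grad_theta Q(s, mu_theta(s))

print(round(theta, 2))
```

In GRL's setting the scalar `theta` is replaced by the policy network's parameters and the hand-coded `Q` by a learned critic, but the update direction is computed in exactly this chained fashion.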
