Abstract

Knowledge representation learning attempts to represent the entities and relations of a knowledge graph in a continuous low-dimensional semantic space. However, most existing methods, such as TransE, TransH, and TransR, utilize only the triples of a knowledge graph; other important information, such as relation descriptions containing relevant knowledge, remains underused. To address this issue, in this paper we propose a relation-text-embodied knowledge representation learning method in which relation descriptions are adopted as side information for representation learning. More specifically, we explore a convolutional neural model to build representations of fine-grained relation descriptions. Furthermore, the knowledge representations of triples and the representations of fine-grained relation descriptions are jointly embedded. Our model is evaluated on the tasks of both link prediction and triple classification. The experimental results show that our model outperforms the baselines, which demonstrates the effectiveness of jointly embedding fine-grained relation descriptions and the knowledge graph.
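The abstract describes two components: a convolutional encoder that turns a relation's textual description into a vector, and a translation-based triple score in the style of TransE. The sketch below is only an illustration of that pipeline, not the paper's actual model: the dimensions, the window size, the `tanh` nonlinearity, and the helper names (`encode_description`, `transe_score`, `W`, `b`) are all assumptions made for the example, and the joint training objective is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

EMB_DIM = 8   # entity/relation embedding size (illustrative choice)
WORD_DIM = 8  # word embedding size (illustrative choice)
KERNEL = 3    # convolution window over description tokens

def encode_description(word_vecs, W, b):
    """1-D convolution over the token sequence of a relation description,
    followed by max-pooling, yielding one fixed-size relation vector."""
    n = len(word_vecs) - KERNEL + 1
    # each window of KERNEL consecutive word vectors is flattened
    # and mapped to EMB_DIM convolutional features
    feats = np.stack([
        np.tanh(W @ np.concatenate(word_vecs[i:i + KERNEL]) + b)
        for i in range(n)
    ])
    return feats.max(axis=0)  # max-pool over window positions

def transe_score(h, r, t):
    """Translation-based plausibility score: lower is better,
    since h + r should lie close to t for a true triple."""
    return np.linalg.norm(h + r - t, ord=1)

# toy example: a 5-token relation description and two entity vectors
desc = [rng.normal(size=WORD_DIM) for _ in range(5)]
W = rng.normal(size=(EMB_DIM, KERNEL * WORD_DIM)) * 0.1
b = np.zeros(EMB_DIM)

r_text = encode_description(desc, W, b)  # description-based relation vector
h = rng.normal(size=EMB_DIM)
t = h + r_text                           # a perfectly translated tail entity

print(transe_score(h, r_text, t))        # → 0.0 by construction
```

In a joint-embedding setup, a score like this would be minimized for observed triples and maximized for corrupted ones, so that the description-derived relation vectors and the entity embeddings are trained together.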
