Abstract

Embedding knowledge graphs into continuous spaces has attracted increasing research attention, and many novel methods have been proposed. Among them, translation-based methods have achieved state-of-the-art experimental results. However, most existing work ignores the following two facts. First, once a relation is fixed, its linked head and tail entities are constrained to a certain extent. Second, in a triplet, if one entity and the relation are fixed, the candidates for the other entity are also constrained to a certain extent. Taking these two facts into consideration, we propose a new knowledge graph embedding model named TransP, which defines a head entity space and a tail entity space for each relation. During embedding, TransP first projects entities into these two position spaces. The entities in these two position spaces are then further projected into a common transformation space, in which the relation is represented as two transformation matrices. A symmetric score function is designed to connect a correct triplet's head and tail entities in the common space. The basic idea behind this score function is that if a triplet holds, its head (tail) entity should be convertible into its tail (head) entity, using the relation's transformation matrix as an intermediate bridge. Viewing the transformation matrices as decoders, this process resembles a common translation process. We evaluate TransP on the triplet classification and link prediction tasks. Extensive experiments show that TransP achieves much better performance than the baseline models.
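
To make the scoring pipeline concrete, the following is a minimal numerical sketch of the process the abstract describes: relation-specific projections into head and tail position spaces, followed by two transformation matrices in a common space and a symmetric score. The abstract does not give the exact projection form, distance measure, or dimensions, so the linear projections, L2 distances, and all names below (P_head, P_tail, M_h, M_t, d_e, d_p) are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

# Hypothetical dimensions (not specified in the abstract).
d_e, d_p = 50, 50  # entity space size, position/common space size

rng = np.random.default_rng(0)

# Entity embeddings for one triplet (h, r, t).
h = rng.normal(size=d_e)  # head entity
t = rng.normal(size=d_e)  # tail entity

# Relation-specific projections into the head/tail position spaces
# (assumed to be linear maps for this sketch).
P_head = rng.normal(size=(d_p, d_e))
P_tail = rng.normal(size=(d_p, d_e))

# The relation's two transformation matrices in the common space.
M_h = rng.normal(size=(d_p, d_p))
M_t = rng.normal(size=(d_p, d_p))

def score(h, t):
    """Symmetric score: after projection, each entity should map onto the
    other through the relation's transformation matrix (assumed L2 norm;
    lower scores indicate more plausible triplets)."""
    h_p = P_head @ h  # head entity in the head position space
    t_p = P_tail @ t  # tail entity in the tail position space
    forward = np.linalg.norm(M_h @ h_p - t_p)   # head -> tail direction
    backward = np.linalg.norm(M_t @ t_p - h_p)  # tail -> head direction
    return forward + backward

print(score(h, t))
```

Summing the two directional terms is one simple way to realize the symmetry the abstract mentions, since the score is then low only when the head maps onto the tail and the tail maps back onto the head.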
