Abstract

Knowledge graphs play a crucial role in many machine learning applications. Since most knowledge graphs are far from complete, many knowledge graph completion models have been proposed. TransE and its extensions model knowledge graphs with additive interactions, whereas DistMult demonstrates that multiplicative interactions are more effective. However, DistMult performs poorly on 1-to-N, N-to-1, and N-to-N relations. Moreover, it does not distinguish between an entity serving as a head and an entity serving as a tail, two roles that should be modeled separately. We propose a more fine-grained knowledge graph embedding model called MultE, which models knowledge graphs with multiplicative interactions. In MultE, an entity has one representation when serving as a head entity and another when serving as a tail entity. To improve performance on 1-to-N, N-to-1, and N-to-N relations, MultE considers all valid tail (or head) entities during training and treats the prediction task as a classification problem. Experimental results show that MultE obtains consistent improvements over DistMult and achieves state-of-the-art performance.
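
The sketch below illustrates the general idea the abstract describes: a DistMult-style multiplicative score computed with separate head and tail embedding tables, combined with a classification loss over all candidate tails. The dimensions, variable names, and sigmoid-based loss are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Hypothetical sizes; the paper's actual dimensions and initialization may differ.
num_entities, num_relations, dim = 1000, 50, 100
rng = np.random.default_rng(0)

E_head = rng.normal(size=(num_entities, dim)) * 0.01  # embeddings used when an entity is a head
E_tail = rng.normal(size=(num_entities, dim)) * 0.01  # separate embeddings used when it is a tail
R = rng.normal(size=(num_relations, dim)) * 0.01      # relation embeddings

def score_all_tails(h_idx, r_idx):
    """Multiplicative (DistMult-style) score of (h, r, ?) against every entity as tail."""
    hr = E_head[h_idx] * R[r_idx]  # element-wise product, shape (dim,)
    return E_tail @ hr             # shape (num_entities,): one score per candidate tail

def loss_1_to_N(h_idx, r_idx, true_tails):
    """Treat tail prediction as multi-label classification over all entities."""
    logits = score_all_tails(h_idx, r_idx)
    probs = 1.0 / (1.0 + np.exp(-logits))  # independent sigmoid per candidate tail
    labels = np.zeros(num_entities)
    labels[true_tails] = 1.0               # all valid tails for (h, r) count as positives
    eps = 1e-12
    return -np.mean(labels * np.log(probs + eps) + (1 - labels) * np.log(1 - probs + eps))

# Example: compute the classification loss for one (head, relation) query
print(loss_1_to_N(h_idx=3, r_idx=7, true_tails=[12, 40, 511]))
```

Head prediction would proceed symmetrically, scoring (?, r, t) against all entities using the head embedding table.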
