Abstract

Knowledge graph (KG) embedding models encode entities and relations into a low-dimensional vector space, which in turn can support various machine learning models for KG completion with good performance and robustness. However, the current entity-ranking protocol for KG completion cannot adequately evaluate the impact of KG embedding models in real-world applications, and KG embeddings are not as widely used as word embeddings. A KG embedding model claimed to be powerful may not be effective in downstream tasks. In this paper, we therefore evaluate the effectiveness of KG embeddings through downstream tasks instead of the entity-ranking protocol. Specifically, we conduct comprehensive experiments with different KG embedding models on KG-based recommendation and question answering tasks. Our findings indicate that: 1) modifying embeddings to capture more complex KG structural information, such as updating TransE to TransR, may not yield improvements in practical applications; and 2) modeling KG embeddings in non-Euclidean space can effectively improve the performance of downstream tasks.

Full Text
Published version (Free)