Abstract

Knowledge graph (KG) embedding models encode entities and relations into a low-dimensional vector space and, in turn, can support various machine learning models for KG completion with good performance and robustness. However, the entity ranking protocol currently used to evaluate KG completion cannot adequately assess the impact of KG embedding models in real-world applications, and KG embeddings have not been adopted as widely as word embeddings: a model that appears powerful under entity ranking may not be effective in downstream tasks. In this paper, we therefore evaluate the effectiveness of KG embeddings through downstream tasks instead of the entity ranking protocol. Specifically, we conduct comprehensive experiments with different KG embedding models on KG-based question answering, recommendation, and natural language processing tasks. Across these different genres of downstream tasks, we analyze the characteristics of each KGE model in practical application scenarios and provide guidance for research on KGE models and knowledge-enhanced downstream tasks.
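To make the two ideas in the abstract concrete, the sketch below illustrates one classic KG embedding model (TransE, used here only as a representative example; the abstract does not name specific models) and the entity ranking protocol it is typically evaluated with: score every candidate tail entity for a query triple and rank them. The toy entities, random embeddings, and dimensions are all illustrative assumptions, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50

# Toy vocabulary of entities and relations (illustrative only).
entities = {"Paris": 0, "France": 1, "Berlin": 2, "Germany": 3}
relations = {"capital_of": 0}

# Randomly initialized embeddings; a real KGE model would train these
# so that plausible triples score higher than implausible ones.
E = rng.normal(scale=0.1, size=(len(entities), dim))
R = rng.normal(scale=0.1, size=(len(relations), dim))

def transe_score(head: str, relation: str, tail: str) -> float:
    """TransE plausibility score: -||h + r - t||. Higher means more plausible."""
    h = E[entities[head]]
    r = R[relations[relation]]
    t = E[entities[tail]]
    return -float(np.linalg.norm(h + r - t))

# Entity ranking protocol: for the query (Paris, capital_of, ?),
# score every candidate tail and sort by plausibility.
scores = {e: transe_score("Paris", "capital_of", e) for e in entities}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)
```

The paper's point is that a model's position in rankings like this does not necessarily predict how useful its embeddings are when plugged into question answering, recommendation, or NLP systems.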
