Abstract

Knowledge graphs (KGs) have a wide range of applications, such as recommender systems, relation extraction, and intelligent question answering systems. However, existing KGs are far from complete. Knowledge graph reasoning (KGR) has been studied to complete KGs by inferring missing entities or relations. However, most previous methods require that all entities be seen during training, which is impractical for real-world KGs where new entities emerge daily. In this paper, we address the open-world KGR task: how to perform reasoning when some entities are not observed at training time. The description-embodied knowledge representation learning (DKRL) model attempts to address the open-world KGR task. We find that DKRL ignores the hierarchical type information of entities when learning entity and relation representations, which limits its performance. To address this problem, we propose a novel model, SDT, that incorporates the structural information, entity descriptions, and hierarchical type information of entities into a unified framework to learn more representative embeddings for KGs. Specifically, for entity descriptions, we explore continuous bag-of-words and convolutional neural network models to encode the semantics of descriptions into entity representations. For hierarchical types, we utilize a recursive hierarchy encoder and a weighted hierarchy encoder to construct projection matrices from hierarchical types. We evaluate the SDT model on both open-world and closed-world reasoning tasks, including entity prediction and relation prediction. Experimental results on large-scale datasets show that SDT achieves a lower mean rank and higher Hits@10 than the baseline methods.
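To make the components named in the abstract concrete, the following is a minimal, hypothetical PyTorch sketch, not the authors' implementation: a CNN encoder over description word embeddings, a weighted hierarchy encoder that mixes per-type projection matrices, and a translation-based (TransE-style) score. All class names, dimensions, and the scoring form are illustrative assumptions.

```python
# Hypothetical sketch of SDT-style components; shapes and names are assumptions.
import torch
import torch.nn as nn


class DescriptionCNN(nn.Module):
    """Encodes an entity description (a sequence of word embeddings) into one vector."""
    def __init__(self, word_dim: int, embed_dim: int, kernel: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(word_dim, embed_dim, kernel_size=kernel, padding=kernel // 2)

    def forward(self, words: torch.Tensor) -> torch.Tensor:
        # words: (batch, seq_len, word_dim) -> (batch, embed_dim)
        h = torch.relu(self.conv(words.transpose(1, 2)))  # (batch, embed_dim, seq_len)
        return h.max(dim=2).values                        # max-pool over the sequence


class WeightedHierarchyEncoder(nn.Module):
    """Builds an entity projection matrix as a learned weighted sum of per-type matrices."""
    def __init__(self, num_types: int, embed_dim: int):
        super().__init__()
        self.type_mats = nn.Parameter(torch.stack([torch.eye(embed_dim) for _ in range(num_types)]))
        self.type_weights = nn.Parameter(torch.zeros(num_types))

    def forward(self, type_ids: torch.Tensor) -> torch.Tensor:
        # type_ids: (batch, k) indices of the hierarchical types attached to each entity
        w = torch.softmax(self.type_weights[type_ids], dim=1)  # (batch, k)
        mats = self.type_mats[type_ids]                        # (batch, k, d, d)
        return (w[..., None, None] * mats).sum(dim=1)          # (batch, d, d)


def transe_score(h: torch.Tensor, r: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    # Lower is better: ||h + r - t||_1, the usual translation-based scoring form.
    return (h + r - t).abs().sum(dim=-1)
```

Under these assumptions, an unseen (open-world) entity would obtain its embedding from the description encoder rather than from a lookup table, while the type-derived projection matrix constrains how that embedding interacts with relation embeddings.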
