Abstract

Reasoning over knowledge graphs (KGs) is important for downstream applications such as question answering and information extraction. Building on the factual triples in KGs, learning logic rules has been explored as a way to perform KG reasoning in an interpretable manner. Some rule learning methods simply apply thresholds to reduce the exponential complexity caused by the vast number of candidate rules derived from KGs. Others prune candidates based on embeddings of the facts in the KG. However, these methods consider only the factual triples, which mainly carry structural information, and ignore the rich semantic information available in real-world KGs. This limitation weakens their ability to find high-quality rules. To this end, we propose LoTus, a joint method that incorporates logic rules with textual representations for interpretable KG reasoning. First, LoTus obtains structural embeddings from the factual triples alone and textual embeddings from entity descriptions, which are then integrated into the representations of entities and relations. Second, the top candidate rules are selected by a proposed joint pruning strategy that combines the structural and textual representations of the KG. Finally, LoTus captures high-quality rules and removes low-quality ones through a filtering process based on rule quality metrics. The method reduces the time complexity of KG reasoning by learning rules while providing interpretability. Extensive experiments on different datasets show that LoTus achieves competitive performance compared with baselines on the KG reasoning metrics Hits@1, Hits@10, and mean reciprocal rank (MRR), and significantly outperforms recent rule learning methods.

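To make the joint pruning step concrete, the following is a minimal, hypothetical sketch of how structural and textual embeddings might be combined to rank candidate rules. The functions `joint_rule_score` and `prune_candidates`, the additive composition of the rule body, the cosine similarity, and the weight `alpha` are illustrative assumptions, not LoTus's actual formulation.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity with a small epsilon to avoid division by zero.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

def joint_rule_score(head_rel, body_rels, struct_emb, text_emb, alpha=0.5):
    """Hypothetical joint score for a candidate rule head <- body.

    struct_emb / text_emb: dicts mapping relation IDs to vectors from the
    structural and textual embedding spaces, respectively.
    """
    # Compose the rule body in each space (simple additive composition).
    s_body = sum(struct_emb[r] for r in body_rels)
    t_body = sum(text_emb[r] for r in body_rels)
    # Similarity of the composed body to the head relation in each space.
    s_score = cosine(s_body, struct_emb[head_rel])
    t_score = cosine(t_body, text_emb[head_rel])
    # Weighted combination of structural and textual evidence.
    return alpha * s_score + (1 - alpha) * t_score

def prune_candidates(candidates, struct_emb, text_emb, top_k=10, alpha=0.5):
    """Keep only the top-k candidate rules (head, body) under the joint score."""
    scored = [(joint_rule_score(h, b, struct_emb, text_emb, alpha), h, b)
              for h, b in candidates]
    scored.sort(key=lambda x: x[0], reverse=True)
    return scored[:top_k]
```

In this sketch, a higher score indicates that the composed rule body is close to the head relation in both embedding spaces, so pruning keeps rules supported by structural and textual evidence simultaneously.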