Abstract

Several knowledge graph (KG) embedding methods have been proposed to learn low-dimensional vector representations of the entities and relations of a KG. Such representations facilitate the link prediction task, in the service of inference and KG completion. In this context, it is important to achieve both efficient KG embedding and explainable predictions. When learning embeddings, sampling negative triples has been highlighted as an important step, since KGs contain only observed positive triples. We propose an efficient simple negative sampling (SNS) method based on the assumption that entities close in the embedding space to the corrupted entity can provide high-quality negative triples. As for explainability, it remains a thriving research question, especially when it comes to analyzing KGs with their rich semantics rooted in description logics. Hence, we propose in this paper a new rule mining method based on the learned embeddings. We extensively evaluate our proposals through several experiments. We evaluate our SNS sampling method plugged into several KG embedding models through link prediction performance on well-known datasets. Experimental results show that SNS improves the prediction performance of KG embedding models and outperforms existing sampling methods. To assess the performance of our rule mining method with and without SNS, we mine and evaluate rules on three popular datasets. The extracted rules are evaluated both as knowledge nuggets extracted from the KG and as support for explainable link prediction. The overall results are good and open the way to many improvements and new perspectives.
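To make the core SNS idea concrete, here is a minimal sketch of nearest-neighbor negative sampling under the stated assumption, i.e., drawing a corrupting entity from among the neighbors of the original entity in embedding space. This is not the authors' implementation; the names (entity_emb, known_triples, k) and the Euclidean distance, candidate-pool size, and uniform fallback are illustrative assumptions.

```python
import numpy as np

def sns_negative(head, relation, tail, entity_emb, known_triples, k=10, rng=None):
    """Corrupt the tail of (head, relation, tail) with an entity that is
    close to the tail in embedding space, skipping observed positives.

    entity_emb:    (num_entities, dim) NumPy array of entity embeddings
    known_triples: set of (head, relation, tail) id triples observed in the KG
    k:             size of the nearest-neighbor candidate pool (assumed)
    """
    rng = rng or np.random.default_rng()
    # Distance from every entity to the entity being corrupted.
    dist = np.linalg.norm(entity_emb - entity_emb[tail], axis=1)
    # Closest entities first; position 0 is the tail itself, so drop it.
    candidates = np.argsort(dist)[1 : k + 1]
    # Keep only candidates that do not form a known positive triple.
    valid = [e for e in candidates if (head, relation, e) not in known_triples]
    # Fall back to a uniform random entity if every neighbor is positive.
    if not valid:
        return int(rng.integers(len(entity_emb)))
    return int(rng.choice(valid))
```

In a typical training loop, each observed triple would be paired with one such sampled negative before computing a margin- or cross-entropy-based embedding loss; head corruption would follow the same pattern symmetrically.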
