Abstract

Knowledge graph (KG) representation learning techniques that learn continuous embeddings of entities and relations in the KG have become popular in many AI applications. With a large KG, the embeddings consume a large amount of storage and memory, which prohibits the deployment of these techniques in many real-world settings. Thus, we propose an approach that compresses the KG embedding layer by representing each entity in the KG as a vector of discrete codes and then composing the embeddings from these codes. The approach can be trained end-to-end with simple modifications to any existing KG embedding technique. We evaluate the approach on standard KG embedding benchmarks and show that it achieves 50-1000x compression of the embeddings with only a minor loss in performance. The compressed embeddings also retain the ability to perform various reasoning tasks such as KG inference.
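
Below is a minimal sketch of the compression idea, assuming a sum-based composition over shared codebooks; the module name, the codebook count (D = 8), and the codebook size (K = 64) are illustrative choices, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DiscreteCodeEmbedding(nn.Module):
    """Compose dense entity embeddings from per-entity discrete codes.

    Each entity is stored as D small integer codes (one per codebook)
    instead of a dense float vector; its embedding is recovered on the
    fly by summing one vector from each of D shared codebooks.
    """

    def __init__(self, num_entities, num_codebooks=8, codebook_size=64, dim=200):
        super().__init__()
        # Per-entity discrete codes; learned end-to-end in the paper,
        # random placeholders here.
        self.register_buffer(
            "codes", torch.randint(0, codebook_size, (num_entities, num_codebooks))
        )
        # D shared codebooks, each holding K vectors of size dim.
        self.codebooks = nn.Parameter(
            torch.randn(num_codebooks, codebook_size, dim) * 0.1
        )

    def forward(self, entity_ids):
        codes = self.codes[entity_ids]  # (batch, D)
        # Look up one vector per codebook and sum into a dense embedding.
        vecs = [self.codebooks[d, codes[:, d]] for d in range(codes.size(1))]
        return torch.stack(vecs, dim=0).sum(dim=0)  # (batch, dim)

emb = DiscreteCodeEmbedding(num_entities=1_000_000)
vectors = emb(torch.tensor([3, 17, 42]))  # shape (3, 200)
```

The storage saving comes from the codes: with 1M entities and dim = 200, a dense float32 table takes about 800 MB, whereas 8 byte-sized codes per entity plus the shared codebooks take only a few megabytes.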

Highlights

  • Knowledge graphs (KGs) are a popular way of storing world knowledge, lending support to a number of AI applications such as search (Singhal, 2012), question answering (Lopez et al., 2013; Berant et al., 2013) and dialog systems (He et al., 2017; Young et al., 2018).

  • We evaluate our approach on standard KG embedding benchmarks and find that we can massively reduce the size of the KG embedding layer while suffering only a minimal loss in performance.

  • Link Prediction: We learn discrete representations corresponding to various continuous KG representations and compare the obtained discrete representations with their continuous counterparts; a sketch of the ranking metrics used follows this list.

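A minimal sketch of the ranking metrics used in such a comparison, assuming some scorer (continuous or code-composed) has already assigned a score to every candidate tail entity; the unfiltered ranking protocol and function names here are illustrative simplifications.

```python
import torch

def rank_of_true_tail(scores, true_tail):
    """Given scores over all candidate tails for a query (head, relation, ?),
    return the rank of the gold tail (1 = best; higher score = better)."""
    return int((scores > scores[true_tail]).sum().item()) + 1

def mrr_hits_at_k(ranks, k=10):
    """Mean reciprocal rank and Hits@k over a list of per-query ranks."""
    r = torch.tensor(ranks, dtype=torch.float)
    return (1.0 / r).mean().item(), (r <= k).float().mean().item()
```

Running the same ranking loop once with the original continuous embeddings and once with embeddings composed from discrete codes yields directly comparable MRR and Hits@k numbers.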

Summary

Introduction

Knowledge graphs (KGs) are a popular way of storing world knowledge, lending support to a number of AI applications such as search (Singhal, 2012), question answering (Lopez et al., 2013; Berant et al., 2013) and dialog systems (He et al., 2017; Young et al., 2018). There has been growing interest in learning embeddings of KGs in continuous vector spaces (Bordes et al., 2011, 2013; Socher et al., 2013). KG embedding approaches represent entities as learnable continuous vectors, while each relation is modeled as an operation in the same space, such as translation, projection, etc. (Bordes et al., 2013; Wang et al., 2014; Lin et al., 2015; Ji et al., 2015). These approaches give us a way to perform reasoning over KGs with simple numerical computation in continuous spaces.
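
As a concrete instance, a translation-based model such as TransE (Bordes et al., 2013) scores a triple (h, r, t) by how close the translated head h + r lands to the tail t. A minimal sketch follows; the embedding dimension, initialization, and L1 distance are illustrative choices.

```python
import torch
import torch.nn as nn

class TransE(nn.Module):
    """Translation-based KG scoring: a triple (h, r, t) is plausible
    when the head embedding plus the relation vector is close to the
    tail embedding."""

    def __init__(self, num_entities, num_relations, dim=100):
        super().__init__()
        self.ent = nn.Embedding(num_entities, dim)
        self.rel = nn.Embedding(num_relations, dim)
        nn.init.xavier_uniform_(self.ent.weight)
        nn.init.xavier_uniform_(self.rel.weight)

    def score(self, h, r, t):
        # L1 distance between h + r and t; lower = more plausible.
        return torch.norm(self.ent(h) + self.rel(r) - self.ent(t), p=1, dim=-1)
```

Training typically minimizes a margin ranking loss that pushes true triples to score lower (closer) than corrupted ones; the compression approach sketched earlier replaces the entity embedding table with a code-composed module while leaving this scoring untouched.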
