Abstract

Knowledge Representation Learning (KRL) plays an essential role in many AI applications and has achieved desirable results on some downstream tasks. However, two main issues of existing KRL embedding techniques have not been well addressed yet. One is that these embedding models can typically process only input datasets far smaller than large-scale real-world knowledge graphs; the other is the lack of a unified framework that integrates current KRL models to facilitate embedding for various applications. We propose DKRL, a distributed KRL training framework that can incorporate different KRL models in the translational category through a unified algorithm template. In DKRL, a set of primitive interface functions is defined to be implemented by various knowledge embedding models, forming a unified algorithm template for distributed KRL. The effectiveness and efficiency of our framework have been verified by extensive experiments on both benchmark and real-world knowledge graphs, which show that our approach can outperform existing ones by a large margin.
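To illustrate the idea of a unified algorithm template over translational models, the following is a minimal sketch. The interface and class names here are hypothetical, chosen for illustration rather than taken from DKRL's actual API; TransE's distance-based scoring function is a standard example of a translational model.

```python
import numpy as np

class TranslationalModel:
    """Hypothetical primitive interface each translational KRL model implements."""
    def score(self, h, r, t):
        # Lower score = more plausible triple (h, r, t).
        raise NotImplementedError

class TransE(TranslationalModel):
    """TransE scores a triple by the distance ||h + r - t||."""
    def __init__(self, norm=1):
        self.norm = norm

    def score(self, h, r, t):
        return np.linalg.norm(h + r - t, ord=self.norm)

def margin_loss(model, pos, neg, margin=1.0):
    # Shared training-template step: margin-based ranking loss between a
    # positive triple and a corrupted (negative) triple. Any model that
    # implements score() can plug into this same template.
    return max(0.0, margin + model.score(*pos) - model.score(*neg))

# Example: a perfectly translated triple scores 0.
h = np.array([1.0, 2.0])
r = np.array([3.0, -1.0])
t = np.array([4.0, 1.0])
model = TransE()
print(model.score(h, r, t))  # → 0.0
```

Because the template only touches the interface, swapping in another translational model (e.g. one that projects entities before translating) changes the scoring primitive but not the distributed training loop.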
