Abstract

Knowledge graph embedding maps discrete symbolic entities and relations into a continuous low-dimensional space, enabling downstream tasks (e.g., inference and knowledge graph completion) to be carried out algebraically. Roughly speaking, there are two main streams of knowledge graph embedding: translational distance models and semantic matching models. This paper analyzes the learning bottleneck that special graph structures create for the former, and finds that it stems from translational distance models treating head and tail entities from the same viewpoint. A location-sensitive embedding (LSE) model is therefore proposed. Unlike previous models, it transforms only the head entity, using a relation-specific mapping, and it models a relation as a general linear transformation rather than a translational operator. The model's representation capacity and its relationship to existing models are analyzed theoretically. To improve efficiency, the transformation is further restricted to a diagonal matrix, yielding a simplified variant, LSE_d. The proposed models are evaluated on four large-scale knowledge graph datasets for the link prediction task. Experiments show that they achieve the highest, or at least competitive, performance compared with state-of-the-art models.
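
To make the idea concrete, below is a minimal sketch in Python of the scoring functions this description suggests. The abstract does not give the exact objective, so the distance-based score f(h, r, t) = -||M_r h - t|| is an assumption here: the head embedding h is mapped by a relation-specific linear transformation M_r (a full matrix for LSE, a diagonal one for LSE_d) and compared against the untouched tail embedding t.

import numpy as np

def lse_score(h, M_r, t):
    """Score a triple (h, r, t) under a location-sensitive embedding.

    Only the head entity is transformed, by the relation-specific
    linear map M_r (a full d x d matrix in LSE); the tail stays in
    its original location. Higher score => more plausible triple.

    NOTE: the distance-based form of this score is an assumption;
    the abstract only states that the head is mapped by a general
    linear transformation.
    """
    return -np.linalg.norm(M_r @ h - t)

def lse_d_score(h, m_r, t):
    """Simplified LSE_d: M_r is restricted to a diagonal matrix,
    stored as a vector m_r, so the head transform reduces to an
    element-wise product."""
    return -np.linalg.norm(m_r * h - t)

# Toy usage with random embeddings of dimension d = 4.
rng = np.random.default_rng(0)
d = 4
h, t = rng.normal(size=d), rng.normal(size=d)
M_r = rng.normal(size=(d, d))   # full transformation matrix: LSE
m_r = rng.normal(size=d)        # diagonal entries only: LSE_d
print(lse_score(h, M_r, t), lse_d_score(h, m_r, t))

The diagonal restriction is what makes LSE_d efficient: scoring drops from O(d^2) per triple (matrix-vector product) to O(d) (element-wise product), at the cost of some representation capacity.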
