Abstract

The essence of knowledge representation learning is to embed a knowledge graph into a low-dimensional vector space so that knowledge becomes computable and amenable to inference. Semantic-indiscriminate knowledge representation models focus more on scalability to real-world knowledge graphs; they assume that the vector representations of entities and relations are consistent across all semantic environments. Semantic-discriminate knowledge representation models focus more on precision; they assume that the vector representations should depend on the specific semantic environment. However, both kinds of models consider knowledge embedding only in the semantic space, ignoring the rich network-structure features among triplet entities. The MulSS model proposed in this paper is a joint embedding learning method that operates across the network-structure space and the semantic space. By integrating the DeepWalk network representation learning method into the semantic-indiscriminate model TransE, MulSS achieves better performance than TransE and several semantic-discriminate knowledge representation models on the triplet classification task. This shows that extending knowledge representation learning from the single semantic space to the joint network-structure and semantic space is of great significance.
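The joint objective described above can be illustrated with a minimal sketch. The sketch below is not the paper's implementation; it only shows the general shape of combining a TransE-style translation energy with a DeepWalk-style structural term. All names (`transe_score`, `joint_loss`), the toy graph, and the way the two losses are summed are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy knowledge graph: (head, relation, tail) triplets over 4 entities, 2 relations.
triplets = [(0, 0, 1), (1, 1, 2), (2, 0, 3)]
n_ent, n_rel, dim = 4, 2, 8

# Entity and relation embeddings (TransE-style, shared by both loss terms).
E = rng.normal(scale=0.1, size=(n_ent, dim))
R = rng.normal(scale=0.1, size=(n_rel, dim))

def transe_score(h, r, t):
    """TransE energy ||h + r - t||: lower means the triplet is more plausible."""
    return float(np.linalg.norm(E[h] + R[r] - E[t]))

# DeepWalk views the triplets as an undirected entity graph and samples walks.
adj = {i: [] for i in range(n_ent)}
for h, _, t in triplets:
    adj[h].append(t)
    adj[t].append(h)

def random_walk(start, length):
    """Sample a DeepWalk-style random walk over the entity graph."""
    walk = [start]
    for _ in range(length - 1):
        nbrs = adj[walk[-1]]
        if not nbrs:
            break
        walk.append(int(rng.choice(nbrs)))
    return walk

def joint_loss(margin=1.0, walk_len=4):
    """Illustrative joint objective: semantic margin loss + structural term."""
    # Semantic term: margin-based ranking loss with a corrupted tail (TransE).
    sem = 0.0
    for h, r, t in triplets:
        t_neg = int(rng.integers(n_ent))
        sem += max(0.0, margin + transe_score(h, r, t) - transe_score(h, r, t_neg))
    # Structural term: pull entities that co-occur on a walk close together
    # (a simplification of DeepWalk's skip-gram objective).
    struct = 0.0
    walk = random_walk(0, walk_len)
    for a, b in zip(walk, walk[1:]):
        struct += float(np.linalg.norm(E[a] - E[b]))
    return sem + struct
```

In an actual training loop, both terms would be minimized jointly by gradient descent over the shared entity embeddings, which is what lets structural regularities influence the semantic representations.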
