Abstract

Knowledge Representation Learning (KRL), also known as Knowledge Embedding, is a widely used method for representing the complex relations in knowledge graphs. The low-dimensional representations learned by KRL models contribute to many tasks such as recommender systems and question answering. Recently, many KRL models have been trained with a square loss or cross-entropy loss under the Closed World Assumption (CWA). Although CWA makes training easy, it conflicts with the link prediction task that KRL is meant to support. To overcome this drawback, in this paper we introduce a new method, the Type-based Prior Possibility Assumption (TPPA). TPPA assigns type-based prior possibilities to missing triplets instead of zeros during KRL training, weakening the harmful influence of CWA. We compare TPPA with the CWA baseline in ConvE and TuckER, two common frameworks for knowledge representation learning. Experimental results on the FB15k-237 dataset show that TPPA-based training outperforms CWA-based training on the link prediction task.
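The abstract does not give implementation details, but the core idea can be illustrated with a minimal sketch: when building the 1-N training targets used by models such as ConvE and TuckER, unobserved tail entities receive a small type-based prior score instead of the hard zero implied by CWA. The function and variable names below (soft_targets_with_type_prior, entity_type_ids, type_prior, prior_scale) are hypothetical and not taken from the paper.

```python
import torch

def soft_targets_with_type_prior(num_entities, observed_tails, entity_type_ids,
                                 type_prior, prior_scale=0.1):
    """Illustrative sketch: replace the hard zeros of the Closed World Assumption
    with a small type-based prior for every unobserved tail entity."""
    # Look up an assumed prior score for each entity via its type id.
    targets = prior_scale * type_prior[entity_type_ids]   # shape: (num_entities,)
    targets[observed_tails] = 1.0                          # observed triplets stay positive
    return targets

# Toy usage with a 1-N scoring model (one logit per candidate tail entity).
num_entities = 5
entity_type_ids = torch.tensor([0, 0, 1, 1, 2])            # assumed entity-to-type mapping
type_prior = torch.tensor([0.8, 0.3, 0.05])                # assumed per-type prior scores
targets = soft_targets_with_type_prior(num_entities, torch.tensor([2]),
                                       entity_type_ids, type_prior)
logits = torch.randn(num_entities)                          # stand-in for model scores
loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, targets)
```

How the prior scores are actually derived from entity types, and how they are scaled, would follow the paper's definition of TPPA; the sketch only shows where such priors would enter the loss in place of CWA's zeros.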
