Abstract

Existing word embeddings are trained on word co-occurrence information, which has been shown to capture the semantics of words well. However, co-occurrence information alone is not sufficient: the vast resources in knowledge graphs should not be ignored when improving these word embeddings. This paper proposes a novel knowledge-guided word embedding fine-tuning model, KGWE, which uses entity embeddings from knowledge graphs to guide the fine-tuning of word embeddings. The main idea of KGWE is to use entity embeddings as targets when fine-tuning the representations of the words that appear in an entity's description in a knowledge graph. We conduct experiments with two well-known static word embeddings and five knowledge graph embedding models, and evaluate the model on 14 word similarity benchmarks. Our results show that word embeddings fine-tuned with KGWE outperform the baseline word embeddings.
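The core mechanism described above can be illustrated with a minimal sketch. The abstract does not give the exact loss, so the snippet below assumes a simple L2 objective that pulls the vector of each word in an entity's description toward that entity's embedding; the function name `kgwe_finetune_step` and the learning rate are hypothetical.

```python
import numpy as np

def kgwe_finetune_step(word_vecs, entity_vec, desc_ids, lr=0.1):
    """One hypothetical fine-tuning step: move the vectors of the words
    appearing in an entity's description toward the entity embedding.
    Assumes a squared-distance objective; the paper's actual loss may differ."""
    for i in desc_ids:
        # gradient of 0.5 * ||w_i - e||^2 with respect to w_i is (w_i - e)
        word_vecs[i] -= lr * (word_vecs[i] - entity_vec)
    return word_vecs

# Toy example: a 3-word vocabulary with 2-dimensional embeddings.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))
e = np.zeros(2)  # toy entity embedding
before = np.linalg.norm(W[0] - e)
W = kgwe_finetune_step(W, e, desc_ids=[0, 1])
after = np.linalg.norm(W[0] - e)
# A description word's vector moves strictly closer to the entity embedding.
```

Words outside the entity description (index 2 here) are left untouched, so the fine-tuning only adjusts the vocabulary covered by the knowledge graph.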
