Abstract

Knowledge graph embedding has become popular in recent years. Its most common application is knowledge graph completion, because knowledge graphs typically contain a large amount of structured data but are incomplete. To perform the completion task, embedding models use positive triples, which exist in the knowledge graph, and negative triples, which are artificially generated for each positive triple. This study aims to investigate the effects of negative sampling on the knowledge graph completion task. Several experiments are conducted with well-known knowledge graph embedding models on knowledge graphs of different sizes to show these effects. We applied the TransE, ComplEx, DistMult, and ConvKB models to the Kinship and FB15k-237 knowledge graphs. In addition, we used random-corruption negative sampling as the negative sampling method.
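To illustrate the random-corruption strategy mentioned above, the following is a minimal Python sketch of how negative triples are commonly generated by replacing the head or tail of a positive triple with a randomly chosen entity. The function name, arguments, and toy data are illustrative assumptions, not the paper's implementation.

```python
import random

def corrupt_triples(positive_triples, entities, num_negatives=1, seed=0):
    """Sketch of random-corruption negative sampling: for each positive
    (head, relation, tail) triple, replace the head or tail with a random
    entity, rejecting corruptions that recreate a known positive triple."""
    rng = random.Random(seed)
    positive_set = set(positive_triples)
    negatives = []
    for head, relation, tail in positive_triples:
        for _ in range(num_negatives):
            while True:
                candidate = rng.choice(entities)
                # Corrupt either the head or the tail with equal probability.
                if rng.random() < 0.5:
                    corrupted = (candidate, relation, tail)
                else:
                    corrupted = (head, relation, candidate)
                # Keep only corruptions that are not already positive triples.
                if corrupted not in positive_set:
                    negatives.append(corrupted)
                    break
    return negatives

# Toy example on a tiny Kinship-style graph (hypothetical data).
triples = [("alice", "parent_of", "bob"), ("bob", "sibling_of", "carol")]
entities = ["alice", "bob", "carol", "dave"]
print(corrupt_triples(triples, entities, num_negatives=2))
```

In practice the generated negatives are paired with the positives during training so that the embedding model learns to score true triples higher than corrupted ones.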
