Abstract

The quality of embeddings is crucial for downstream tasks on knowledge graphs. Researchers therefore apply neural architecture search to knowledge graph embedding so that an appropriate neural network can be constructed automatically for each dataset. An existing approach divides the search space into a macro search space and a micro search space. The search strategy for the micro space is based on a one-shot weight-sharing strategy, but all the information obtained from the previous supernet training is discarded, so the advantages of the one-shot algorithm are not fully exploited. In this paper, we conduct experiments on common datasets for two important downstream tasks of knowledge graph embedding, entity alignment and link prediction, and compare the search performance with existing manually designed neural networks as well as strong neural architecture search algorithms. The results show that, given the same search time on the same dataset, the improved algorithm finds better architectures, and it takes less time to find architectures of comparable performance. Moreover, on these datasets the models found by the improved algorithm reach the level of the best manually designed models.
