Abstract

Knowledge representation learning techniques embed the entities and relations of a knowledge graph into a continuous, dense, low-dimensional vector space and supply the rich semantic associations between entities to the recommendation module, improving recommendation performance and interpretability. However, knowledge representation learning based on the translation model TransD has too many parameters and no association between entity representations, which makes it difficult to apply to large knowledge graphs; moreover, most existing knowledge-graph-based recommendation systems ignore the different levels of importance that users attach to different relations of items. To address these shortcomings, we propose an improved knowledge representation learning model, Cluster TransD, and a recommendation model, Cluster TransD-GAT, based on the knowledge graph and graph attention networks. Cluster TransD reduces the number of entity projections and introduces associations between entity representations, lowering the computational cost and making the model better suited to large knowledge graphs, while Cluster TransD-GAT captures the attention that different users pay to different relations of items. Extensive comparison and ablation experiments on three real-world datasets show that the proposed model achieves significant improvements in the average rank of the scoring function, accuracy, and recall compared with other state-of-the-art models.
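The abstract does not give the Cluster TransD formulas, but it builds on the standard TransD translation model, in which each head/tail entity is mapped into the relation space by a projection matrix constructed from projection vectors before the translational distance is scored. A minimal sketch of that baseline scoring step is shown below; the function name `transd_score` and the use of NumPy are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def transd_score(h, h_p, t, t_p, r, r_p):
    """Standard TransD triple score (the baseline Cluster TransD improves on).

    h, t   : entity embeddings, shape (n,)
    h_p,t_p: entity projection vectors, shape (n,)
    r      : relation embedding, shape (m,)
    r_p    : relation projection vector, shape (m,)
    """
    n, m = h.shape[0], r.shape[0]
    I = np.eye(m, n)                       # identity-like matrix mapping R^n -> R^m
    h_perp = (np.outer(r_p, h_p) + I) @ h  # project head into the relation space
    t_perp = (np.outer(r_p, t_p) + I) @ t  # project tail into the relation space
    # Higher (less negative) score means the triple (h, r, t) is more plausible.
    return -np.linalg.norm(h_perp + r - t_perp) ** 2
```

Because every entity carries its own projection vector, the parameter count and per-triple projection work grow with the number of entities; the proposed Cluster TransD reduces the number of entity projections (e.g., by sharing them across groups of entities) to ease this cost on large knowledge graphs.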
