Abstract

Representation learning aims to encode the relationships among research objects into low-dimensional, compressed, and distributed representation vectors. Network representation learning learns the structural relationships between network vertices, while knowledge representation learning models the entities and relations in knowledge bases. In this paper, we first introduce the idea of knowledge representation learning into network representation learning: we propose a new approach that models vertex triplet relationships based on DeepWalk and TransE. Consequently, we propose MRNR, an optimized network representation learning algorithm that introduces the multi-relational data between vertices into the network representation learning procedure. Importantly, we adopt a higher-order transformation strategy to optimize the learnt network representation vectors. The purpose of MRNR is to let multi-relational data (triplets) effectively guide and constrain the network representation learning procedure. The experimental results demonstrate that the proposed MRNR learns discriminative network representations, which show better performance on network classification, visualization, and case-study tasks than the baseline algorithms considered in this paper.
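The TransE-style constraint mentioned above can be illustrated with a short sketch. This is a hypothetical, minimal rendering of the general idea (not the paper's actual implementation): a triplet (h, r, t) is considered plausible when the head vector translated by the relation vector lands close to the tail vector, i.e. the score ||h + r − t|| is small. The function name `transe_score` and the dimension `k` are illustrative assumptions.

```python
import numpy as np

def transe_score(h, r, t):
    """TransE-style plausibility score for a triplet (h, r, t):
    the L2 distance between (h + r) and t; lower means more plausible."""
    return float(np.linalg.norm(h + r - t))

# Toy embeddings (illustrative only).
rng = np.random.default_rng(0)
k = 8                      # embedding dimension (assumed for the sketch)
h = rng.normal(size=k)     # head-vertex embedding
r = rng.normal(size=k)     # relation embedding
t = h + r                  # a perfectly consistent tail embedding

score_good = transe_score(h, r, t)        # near zero: plausible triplet
score_bad = transe_score(h, r, t + 1.0)   # larger: implausible triplet
```

In an MRNR-like setting, a loss built from such scores would be minimized alongside the DeepWalk objective, so that triplet relations constrain the learnt vertex vectors.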

Highlights

  • Network representation learning (NRL) plays a pivotal role in many areas; it aims at learning low-dimensional, compressed, and distributed representation vectors for all kinds of networks. NRL can be intuitively regarded as a network encoding task in which each node is assigned a unique vector in the representation space and neighboring vertices are kept at a closer distance under a distance evaluation function

  • We introduced the idea of knowledge representation learning into network representation learning; that is, we used knowledge triplets to constrain the training procedure of network representation learning

  • We found that the classification performance of text-associated DeepWalk (TADW) was inferior to that of the MRNR algorithm on the Citeseer, Database System and Logic Programming (DBLP), and Simplified DBLP (SDBLP) datasets



Introduction

Network representation learning (NRL) plays a pivotal role in many areas; it aims at learning low-dimensional, compressed, and distributed representation vectors for all kinds of networks. NRL can be intuitively regarded as a network encoding task in which each node is assigned a unique vector in the representation space and neighboring vertices are kept at a closer distance under a distance evaluation function. Distributed representation learning is derived from language-embedding applications. The representative language-embedding algorithm is Word2Vec, proposed by Mikolov et al. [1,2]. Motivated by Word2Vec, the DeepWalk network representation learning algorithm was proposed by Perozzi [3], who adopted a random walk strategy to generate vertex sequences, which play the same role as sentences in language models. DeepWalk takes a random walk sequence v1, v2, …, vn as the input of the network representation learning model, where n denotes the length of the random walk sequence. The output of DeepWalk is a low-dimensional vector rv ∈ Rk, where k is the size of the network representation vector. The DeepWalk algorithm has been successfully applied to many tasks [4]
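The first stage of DeepWalk described above can be sketched as follows. This is a minimal illustration under assumed details (uniform next-vertex choice, a toy adjacency list, the helper name `random_walk`): truncated random walks over the graph yield vertex sequences that serve as the "sentences" later fed to a Skip-gram (Word2Vec-style) model.

```python
import random

def random_walk(adj, start, length, rng):
    """Generate one truncated random walk of at most `length` vertices,
    choosing each next vertex uniformly among the current vertex's neighbors."""
    walk = [start]
    while len(walk) < length:
        neighbors = adj[walk[-1]]
        if not neighbors:      # dead end: stop the walk early
            break
        walk.append(rng.choice(neighbors))
    return walk

# Toy undirected graph as an adjacency list (illustrative only).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
rng = random.Random(42)

# One walk per start vertex; these sequences are the "sentences"
# a Skip-gram model would be trained on.
walks = [random_walk(adj, v, length=5, rng=rng) for v in adj]
```

In the full pipeline, many walks per vertex would be collected and passed to a Word2Vec-style trainer to produce the k-dimensional vectors rv.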

