Abstract

Network representation learning is a key research field in network data mining. In this paper, we propose a novel multi-view network representation algorithm (MVNR) that embeds multi-scale relations of network vertices into a low-dimensional representation space. In contrast to existing approaches, MVNR explicitly encodes higher-order information using k-step networks. In addition, we introduce the matrix forest index as a network feature that can be used to balance the representation weights of the different network views. We also analyze the relationship between MVNR and several well-known methods, including DeepWalk, node2vec and GraRep. We conduct experiments on several real-world citation datasets and demonstrate that MVNR outperforms recent approaches based on neural matrix factorization. Specifically, we demonstrate the effectiveness of MVNR on network classification, visualization and link prediction tasks.
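The matrix forest index itself is not defined in this summary; as a point of reference, it is commonly computed as the inverse of (I + L), where L is the graph Laplacian. The sketch below is an illustrative assumption rather than the authors' implementation; the function name matrix_forest_index and the toy graph are hypothetical.

    import numpy as np

    def matrix_forest_index(A):
        # L = D - A is the combinatorial Laplacian of the undirected graph
        degrees = A.sum(axis=1)
        L = np.diag(degrees) - A
        n = A.shape[0]
        # (I + L) is symmetric positive definite, so the inverse always exists;
        # entry (i, j) acts as a forest-based similarity between vertices i and j
        return np.linalg.inv(np.eye(n) + L)

    # hypothetical toy example: a 4-vertex path graph
    A = np.array([[0., 1., 0., 0.],
                  [1., 0., 1., 0.],
                  [0., 1., 0., 1.],
                  [0., 0., 1., 0.]])
    print(matrix_forest_index(A))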

Highlights

  • Network representation learning aims to learn low-dimensional, compressed and dense distributed representation vectors for various kinds of networks

  • The method is based on a joint learning model and mainly improves the performance of network representation learning by introducing additional network attribute information, such as community structure, text content and labels

  • Experimental results show that the multi-view network representation algorithm (MVNR) takes 15 min 1 s to train the network representation model, GraRep takes 5 min 35 s when K is set to 6, and NEU takes about 7 s to convert the network representations learnt by DeepWalk


Summary

Introduction

Network representation learning aims to learn low-dimensional, compressed and dense distributed representation vectors for various kinds of networks. It can be viewed as a network encoding task in which nearest-neighboring vertices are kept close together in the low-dimensional representation space. Such a representation can incorporate higher-order network information through multiple steps of random walks, as sketched below. In this respect, WALKLETS [2] has carried out positive explorations and shown that multi-step random walks can encode higher-order features into the network representations. In sparse networks, high-order network representation learning can extract valuable features from the existing network structure alone. GraRep [4] and Network Embedding
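As a minimal illustration of how multiple random-walk steps expose higher-order structure, the sketch below builds k-step transition matrices P^k from a row-normalised adjacency matrix. It follows the general GraRep/WALKLETS idea rather than MVNR's exact construction, and the function name and toy graph are assumptions.

    import numpy as np

    def k_step_matrices(A, K):
        # 1-step random-walk transition matrix: each row of A divided by its degree
        P = A / A.sum(axis=1, keepdims=True)
        steps, current = [], np.eye(A.shape[0])
        for _ in range(K):
            current = current @ P      # k-th power = k-step transition probabilities
            steps.append(current.copy())
        return steps

    # hypothetical toy graph with 4 vertices and no isolated nodes
    A = np.array([[0., 1., 1., 0.],
                  [1., 0., 1., 0.],
                  [1., 1., 0., 1.],
                  [0., 0., 1., 0.]])
    views = k_step_matrices(A, K=3)    # three "views" of the network at growing scales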

