Abstract

The goal of network representation learning is to embed each vertex of a network into a low-dimensional vector space. Existing methods fall into two categories: homogeneous models, which learn vertex representations in a homogeneous information network, and heterogeneous models, which learn vertex representations in a heterogeneous information network. In this paper, we study representation learning for heterogeneous information networks, a problem that has recently attracted considerable attention; the presence of multiple types of nodes and links makes it particularly challenging. We develop a scalable representation learning model, SERL. SERL formalizes how different semantic paths are fused during the random walk procedure when exploring a node's neighborhood, and then leverages a heterogeneous skip-gram model to learn node embeddings. Extensive experiments show that SERL outperforms state-of-the-art models on various heterogeneous network analysis tasks, such as node classification, similarity search, and visualization.
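The abstract describes walks guided by semantic paths (metapaths) over a typed graph. As an illustration only, here is a minimal sketch of a generic metapath-guided random walk on a toy author–paper–venue network; the node names, graph layout, and the specific walk policy are assumptions for illustration, not SERL's actual fusion scheme, which the abstract does not specify.

```python
import random

# Hypothetical toy heterogeneous graph (not from the paper):
# each node has a type, and edges are stored as adjacency lists.
node_type = {
    "a1": "author", "a2": "author",
    "p1": "paper", "p2": "paper",
    "v1": "venue",
}
adj = {
    "a1": ["p1"],
    "a2": ["p1", "p2"],
    "p1": ["a1", "a2", "v1"],
    "p2": ["a2", "v1"],
    "v1": ["p1", "p2"],
}

def metapath_walk(start, metapath, length, rng=random):
    """Walk `length` nodes from `start`, stepping only to neighbors whose
    type matches the next type in the (cyclically repeated) metapath,
    e.g. author-paper-venue-paper for the pattern APVPA."""
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        wanted = metapath[len(walk) % len(metapath)]  # next required type
        candidates = [n for n in adj[cur] if node_type[n] == wanted]
        if not candidates:  # dead end: no neighbor of the required type
            break
        walk.append(rng.choice(candidates))
    return walk

walk = metapath_walk("a1", ["author", "paper", "venue", "paper"],
                     length=9, rng=random.Random(0))
print(walk)
```

The resulting walks could then be fed to any skip-gram trainer as "sentences" of node IDs; the type-aware (heterogeneous) skip-gram objective mentioned in the abstract would additionally condition the context softmax on node types.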
