Abstract

The goal of network representation learning, also called network embedding, is to encode network structure into a continuous low-dimensional embedding space in which geometric relationships among the vectors reflect the relationships among nodes in the original network. Existing network representation learning methods are typically single-task methods that focus on preserving node proximity from only one aspect. However, node proximity depends on both local and global structure, which limits the node embeddings these methods can learn. To address this problem, in this paper we propose a novel method, Multi-Task Learning-Based Network Embedding, termed MLNE. The method comprises two tasks that jointly preserve node proximity: the first preserves the high-order proximity between pairwise nodes across the whole network, and the second preserves the low-order proximity within the one-hop neighborhood of each node. By jointly learning these tasks in a supervised deep learning model, MLNE obtains node embeddings that sufficiently reflect the roles nodes play in networks. To demonstrate the efficacy of MLNE over existing state-of-the-art methods, we conduct experiments on multi-label classification, link prediction, and visualization on five real-world networks. The experimental results show that our method performs competitively.

Highlights

  • A network is an important way of representing the relationships between objects, for example, in social networks, power grids, and citation networks (Gong et al., 2017)

  • We propose a multi-task learning-based network embedding called MLNE

  • We find that MLNE performs competitively on multi-label classification, link prediction, and visualization


Summary

INTRODUCTION

A network is an important way of representing the relationships between objects, for example, in social networks, power grids, and citation networks (Gong et al., 2017). Cao et al. proposed a deep neural network for learning graph representations (DNGR) (Cao et al., 2016). Both SDNE and DNGR follow the encoder-decoder framework, in which the encoder maps a high-dimensional feature vector into a lower-dimensional representation and the decoder reconstructs the original feature vector from that representation. These methods build a proximity matrix whose elements represent pairwise node proximity and apply an autoencoder model to learn representations from that matrix. In our framework, a shared encoder encodes the global feature information into a low-dimensional node embedding, and a decoder reconstructs that information from the learned embeddings; a second task preserves the local features.
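The proximity matrix described above can be built, as in DNGR, by random surfing over the graph followed by a positive pointwise mutual information (PPMI) transformation. The sketch below illustrates the idea only; the step count, restart probability, and toy graph are assumptions for demonstration, not the settings used in the paper:

```python
import numpy as np

def random_surf(adj, steps=4, alpha=0.98):
    """Random surfing (as in DNGR): accumulate transition probabilities
    over several steps, restarting at the start node with prob. 1-alpha.
    `steps` and `alpha` are illustrative defaults, not the paper's values."""
    n = adj.shape[0]
    trans = adj / adj.sum(axis=1, keepdims=True)  # row-normalized transitions
    p0 = np.eye(n)                                # each node surfs from itself
    p = p0.copy()
    M = np.zeros((n, n))
    for _ in range(steps):
        p = alpha * (p @ trans) + (1 - alpha) * p0
        M += p
    return M

def ppmi(M):
    """Positive pointwise mutual information of a co-occurrence matrix."""
    total = M.sum()
    row = M.sum(axis=1, keepdims=True)
    col = M.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(M * total / (row @ col))
    pmi[~np.isfinite(pmi)] = 0.0                  # zero-count cells
    return np.maximum(pmi, 0.0)                   # clip negative PMI to zero

# Toy 4-node cycle graph (hypothetical input)
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
X = ppmi(random_surf(adj))
print(X.shape)  # (4, 4)
```

The resulting matrix `X` is the kind of high-order proximity input an autoencoder-style encoder can then compress into low-dimensional node embeddings.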

PRELIMINARIES
Notations and Definitions
PPMI and Random Surfing
Multi-Task Learning
THE FRAMEWORK
An Overview of the Framework
Multi-Task Learning Model
Datasets
Baseline Algorithms
Parameter Setting
Link Prediction
Node Classification
Visualization
DATA AVAILABILITY STATEMENT
Findings
CONCLUSION