Abstract

With the rapid development of neural networks, much attention has been focused on network embedding for complex network data, which aims to learn low-dimensional embeddings of nodes in a network and to effectively apply the learned representations to various graph-based analytical tasks. Two typical kinds of models exist: shallow random-walk network representation methods and deep learning models such as graph convolutional networks (GCNs). The former captures the linear structure of the network using depth-first search (DFS) and breadth-first search (BFS), whereas the Hierarchical GCN (HGCN) is an unsupervised graph embedding model that describes the global nonlinear structure of the network by aggregating node information. However, neither existing kind of model can simultaneously capture both the nonlinear and the linear structural information of nodes. Thus, this paper explores the nodal characteristics of nonlinear and linear structures and proposes an unsupervised representation method based on HGCN that jointly learns shallow and deep models. Experiments on node classification and dimension-reduction visualization are carried out on citation, language, and traffic networks. The results show that, compared with existing shallow network representation models and deep network models, the proposed model achieves better performance in terms of micro-F1, macro-F1, and accuracy scores.

Highlights

  • The purpose of network embedding is to map high-dimensional sparse network data into a low-dimensional, dense, real-valued vector space, which can be used to adaptively extract features and facilitate analysis of downstream tasks

  • Because existing shallow and deep learning models cannot simultaneously capture the nonlinear and linear structural information of nodes, we introduce a hierarchical graph convolution network (HGCN) to put forward the HGCN-L and HGCN-DL methods

  • In order to verify the effectiveness of the model, node classification experiments were conducted on citation, traffic, and language network data sets

Summary

Introduction

The purpose of network embedding is to map high-dimensional sparse network data into a low-dimensional, dense, real-valued vector space, which can be used to adaptively extract features and facilitate analysis of downstream tasks. Typical methods include shallow random-walk models and semi-supervised deep learning models based on graph convolutional networks (GCNs). A random walk is usually used in the aforementioned shallow models because it can capture the linear structure of the network. The proposed model extracts linear structural features of nodes and obtains global nonlinear structural features by aggregating information from neighboring nodes with HGCN. It fuses the two kinds of extracted features and applies them to downstream tasks. The network embedding of the input graph is thus obtained by extracting both nonlinear and linear structural features.
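The two ingredients described above can be illustrated with a minimal sketch: a truncated random walk that traces linear (path-like) structure, as shallow models such as DeepWalk do, and a single GCN-style mean aggregation over neighbors that captures local structure, as in one layer of a graph convolutional network. This is only a toy illustration under assumed inputs, not the paper's implementation; the adjacency list `graph` and the helper names are hypothetical.

```python
import random

# Toy undirected graph as an adjacency list (hypothetical example data).
graph = {
    0: [1, 2],
    1: [0, 2, 3],
    2: [0, 1],
    3: [1],
}

def random_walk(graph, start, length, rng):
    """Shallow model: a truncated random walk samples a node sequence,
    capturing linear (path-like) structure as in DeepWalk-style methods."""
    walk = [start]
    for _ in range(length - 1):
        walk.append(rng.choice(graph[walk[-1]]))
    return walk

def mean_aggregate(graph, features):
    """Deep model: one GCN-style layer averages each node's features
    with those of its neighbors, capturing local structural context."""
    out = {}
    for v, fv in features.items():
        neigh = [features[u] for u in graph[v]] + [fv]
        out[v] = [sum(xs) / len(neigh) for xs in zip(*neigh)]
    return out

rng = random.Random(0)
walk = random_walk(graph, start=0, length=5, rng=rng)

# Toy node features: [node id, constant bias].
features = {v: [float(v), 1.0] for v in graph}
agg = mean_aggregate(graph, features)
```

In a full model, the walk sequences would feed a skip-gram-style objective and the aggregation would be followed by a learned weight matrix and nonlinearity; here both are stripped down to show only the structural signal each mechanism sees.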

Related work
Matrix
Machine learning
Structure
Graph neural networks
Shallow linear structure representation
HGCN nonlinear structure representation model
Experiments
Data sets
DeepWalk
Findings
Conclusions
