Abstract

Graph Neural Networks (GNNs) have been widely applied to semi-supervised node classification on non-Euclidean data. However, many GNNs cannot exploit the positive information carried by nodes far away from each central node during aggregation, even though such remote nodes can enhance the central node's representation. Some GNNs also ignore the rich structure information in each central node's surroundings or in the entire network. Moreover, most GNNs have a fixed architecture and cannot change their components to adapt to different tasks. In this article, we propose ATPGNN, a semi-supervised learning platform with three variable components that overcomes the above shortcomings. The model adapts to different tasks by changing its components and supports inductive learning. The key idea is to first build a high-order topology graph from the similarity of node structure information; specifically, we reconstruct the relationships between nodes in a latent space obtained by network embedding. Second, we apply graph representation learning methods to extract the representations of remote nodes on the high-order topology graph. Third, we use network embedding methods to obtain the graph structure information of each node. Finally, an attention mechanism combines the remote-node representations, graph structure information, and features of each node, and the fused representations are used to learn node representations. Extensive experiments on real attributed networks demonstrate the superiority of the proposed model over traditional GNNs.
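The first step above, building a high-order topology graph from the similarity of node structure information, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes node embeddings are already available (ATPGNN's embedding method is a swappable component), uses cosine similarity as the structural similarity measure, and connects each node to its `k` most similar nodes.

```python
import numpy as np

def build_topology_graph(embeddings, k=2):
    """Connect each node to its k most structurally similar nodes, measured by
    cosine similarity in a latent embedding space. This reconstructs node
    relationships in the latent space, yielding a high-order topology graph
    whose edges can link nodes that are far apart in the original graph."""
    norm = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = norm @ norm.T                    # pairwise cosine similarity
    np.fill_diagonal(sim, -np.inf)         # exclude self-similarity
    n = embeddings.shape[0]
    adj = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(sim[i])[-k:]:  # indices of the k most similar nodes
            adj[i, j] = adj[j, i] = 1.0    # add an undirected edge
    return adj

# Toy structural embeddings for 4 nodes (hypothetical values): nodes 0/1 and
# 2/3 are structurally similar pairs, so each pair becomes linked.
emb = np.array([[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9]])
adj = build_topology_graph(emb, k=1)
print(adj)
```

In the full model, aggregation is then performed over this topology neighborhood in addition to the ordinary graph neighborhood.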

Highlights

  • In the past few years, Convolutional Neural Networks (CNNs) have developed rapidly and achieved great success

  • In view of the above limitations, we propose a new model, ATPGNN, which makes full use of the topology information of graph datasets, mines the nodes in the high-order neighborhood that provide positive information for the aggregation of central nodes, and uses these nodes to expand each node's aggregation neighborhood

  • By analyzing existing Graph Neural Networks (GNNs) and their variants, we summarize and classify the defects of these models, and propose a new design scheme to guide the future development of GNNs

Summary

INTRODUCTION

Convolutional Neural Networks (CNNs) have developed rapidly and achieved great success. We modularize the methods for extracting graph structure information and for aggregating neighbors on the high-order topology graph, and propose ATPGNN, a semi-supervised classification learning platform with three variable components. These components are the network embedding method used to build the high-order topology graph, the aggregation method in the topology neighborhood, and the network embedding representations of nodes, each of which can be changed to suit the actual task. Reference [24] proposed MixHop, a model that learns a general class of neighborhood mixing relationships, including difference operators, by repeatedly mixing feature representations of neighbors at various distances. Reference [26] introduced a context-surrounding GNN framework and proposed two smoothness metrics, derived from node features and node labels, to measure the quantity and quality of node representation information in the neighborhood. Although the above works have their shortcomings, they provided much inspiration for our design.
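The neighborhood-mixing idea attributed to MixHop [24] can be sketched as follows: one layer propagates features through several powers of the normalized adjacency matrix, so 1-hop and more remote neighborhoods are mixed in a single layer. This is a simplified sketch under assumed shapes (one weight matrix per adjacency power), not the reference implementation, and it omits nonlinearities and training.

```python
import numpy as np

def mixhop_layer(adj, features, weights):
    """MixHop-style layer sketch: concatenate A^j X W_j for j = 0, 1, 2, ...,
    where A is the symmetrically normalized adjacency with self-loops.
    `weights` holds one (hypothetical) weight matrix per power j."""
    a_hat = adj + np.eye(adj.shape[0])              # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt        # D^-1/2 (A+I) D^-1/2
    outputs, prop = [], features
    for j, w in enumerate(weights):
        if j > 0:
            prop = a_norm @ prop                    # prop now equals A^j X
        outputs.append(prop @ w)                    # per-power linear transform
    return np.concatenate(outputs, axis=1)          # mix all powers by concat

# Toy usage: a 4-node path graph with 3 features per node, powers 0..2.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3))
a = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
ws = [rng.standard_normal((3, 2)) for _ in range(3)]
out = mixhop_layer(a, x, ws)
print(out.shape)  # each node gets 3 powers x 2 output dims = 6 columns
```

ATPGNN differs in that its aggregation over remote nodes happens on the learned high-order topology graph rather than on fixed adjacency powers, but the mixing of multi-distance information is the shared motivation.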

BACKGROUND
MOTIVATION
MODEL ARCHITECTURE OF ATPGNN
BUILDING TOPOLOGY NEIGHBORHOOD
MODEL ALGORITHM DESCRIPTION
ANALYSIS OF THE ADVANTAGES OF ATPGNN
Findings
CONCLUSION
