Abstract

Research on graph classification with graph neural networks has attracted wide attention. The graphs to be classified may vary in size (i.e., in the number of nodes and edges) and in properties (e.g., average node degree, diameter, and clustering coefficient). This diversity poses significant challenges for existing graph learning techniques, since different graphs have different best-fit hyperparameters. Consequently, it is unreasonable to learn representations for a diverse set of graphs with a single, unified graph neural network. Motivated by this, we design an end-to-end Multiplex Graph Neural Network (MxGNN) that learns graph representations with multiple GNNs and combines them with a learnable method. The main challenge lies in combining the multiple representations. Our findings show that a priori graph properties do affect the quality of representation learning and can be used to guide it. Experiments on graph classification with multiple data sets show that MxGNN outperforms existing graph representation learning methods.
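The learnable combination described above can be sketched as a softmax gate over the branch embeddings. This is a minimal illustration, not the paper's actual mechanism: the function name, the fixed gate logits, and the plain weighted sum are all assumptions for exposition.

```python
import numpy as np

def combine_representations(reps, gate_logits):
    """Combine K graph-level embeddings with softmax gating weights.

    reps: array of shape (K, d), one embedding per GNN branch.
    gate_logits: array of shape (K,), scores that would be learnable
    in a real model (fixed here for illustration).
    """
    w = np.exp(gate_logits - gate_logits.max())
    w = w / w.sum()                          # softmax over the K branches
    return (w[:, None] * reps).sum(axis=0)   # weighted sum, shape (d,)

# Two branches with equal logits -> equal weights -> plain average.
reps = np.array([[1.0, 0.0], [0.0, 1.0]])
combined = combine_representations(reps, np.array([0.0, 0.0]))
```

In a trained model the gate logits would be produced from a priori graph properties (node count, average degree, etc.), which is how such properties could guide the combination.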

Highlights

  • Graphs are known to have complicated structures and have a myriad of real-world applications

  • Many newly proposed graph learning approaches are inspired by Convolutional Neural Networks (CNNs) [1], which have been greatly successful in learning two-dimensional image data

  • We propose MxGNN for graph representation learning in graph classification tasks


Introduction

Graphs are known to have complicated structures and have a myriad of real-world applications. Many newly proposed graph learning approaches are inspired by Convolutional Neural Networks (CNNs) [1], which have been greatly successful in learning two-dimensional image data (grid structure). A multitude of different Graph Convolutional Networks (GCNs) [2] have been proposed, which learn node-level representations either by aggregating feature information from neighbors (spatial-based approaches) [3] or by introducing filters from the perspective of graph signal processing (spectral-based approaches) [4]. When performing node-representation learning tasks via the graph convolution operation, a small output embedding size suffices for simple and small graphs, since a large embedding size could lead to overfitting.
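The spatial neighbor-aggregation step mentioned above can be sketched as a single mean-aggregation layer. This is a generic illustration of the idea, assuming a dense adjacency matrix and a mean aggregator; it is not the specific layer used in MxGNN.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One spatial graph convolution layer: mean-aggregate features
    from each node's neighbors (plus itself), then apply a linear
    transform and ReLU.

    A: (n, n) adjacency matrix; X: (n, f) node features; W: (f, d) weights.
    Returns an (n, d) matrix of updated node representations.
    """
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)   # degree of each node
    H = (A_hat @ X) / deg                    # mean aggregation over neighbors
    return np.maximum(H @ W, 0.0)            # linear transform + ReLU

# Tiny example: a 3-node path graph with one-hot node features.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
out = gcn_layer(A, np.eye(3), np.eye(3))
```

The output embedding size is set by the second dimension of `W`, which is the hyperparameter the paragraph above argues should be small for simple, small graphs.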
