Abstract

Transformer-based graph neural network models have achieved remarkable results in graph representation learning in recent years. One of the main challenges in graph representation learning with the Transformer architecture is the lack of a universal positional encoding. Standard positional encoding methods usually involve the eigenvectors of the graph Laplacian matrix. However, the structural information captured by these eigenvectors is insufficient for graph learning tasks that depend on a node's local structure. In this work, we propose a novel node encoding that leverages both a node's global position information and its local structural information, and that generalizes well across a wide range of graph learning tasks. The global positional encoding branch operates on the eigenvalues and eigenvectors of the Laplacian matrix of the entire graph. The structural encoding branch is derived from a spectral encoding of each node's local subgraph; it captures local properties that are usually omitted by Laplacian positional encodings because high graph frequencies are cut off. Both encoding branches use learnable weights and are mapped into predefined embedding spaces, and a weighted combination then produces a unique positional encoding for each node. We validate the effectiveness of the proposed encoding on a variety of graph learning datasets, covering node classification, link prediction, graph classification, and graph regression tasks. The overall results demonstrate that our structural and positional encoding balances local and global structural information and outperforms most of the baseline models.
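To make the two-branch idea described above concrete, the following is a minimal sketch, not the paper's implementation: it computes a global positional encoding from low-frequency Laplacian eigenvectors of the whole graph and a local structural encoding from the Laplacian spectrum of each node's hop-limited subgraph, projects both through learnable linear maps, and mixes them with a learnable weight. All names (CombinedNodeEncoding, laplacian_low_freq_eigvecs, k_global, k_local, d_model, alpha, hops) are illustrative assumptions, and PyTorch is assumed as the framework.

```python
import torch
import torch.nn as nn


def laplacian_low_freq_eigvecs(adj: torch.Tensor, k: int) -> torch.Tensor:
    """First k non-trivial eigenvectors of the combinatorial graph Laplacian."""
    deg = adj.sum(dim=1)
    lap = torch.diag(deg) - adj
    _, eigvecs = torch.linalg.eigh(lap)        # eigenvalues in ascending order
    return eigvecs[:, 1:k + 1]                 # skip the constant eigenvector


class CombinedNodeEncoding(nn.Module):
    """Global Laplacian PE branch + local spectral SE branch, mixed by a learnable weight.

    This is an illustrative sketch of the idea in the abstract, not the authors' code.
    """

    def __init__(self, k_global: int, k_local: int, d_model: int):
        super().__init__()
        self.k_global = k_global
        self.k_local = k_local
        self.global_proj = nn.Linear(k_global, d_model)   # learnable map for the global branch
        self.local_proj = nn.Linear(k_local, d_model)     # learnable map for the local branch
        self.alpha = nn.Parameter(torch.tensor(0.5))      # learnable combination weight

    def forward(self, adj: torch.Tensor, hops: int = 1) -> torch.Tensor:
        n = adj.shape[0]

        # Global branch: low-frequency eigenvectors of the whole-graph Laplacian
        # (assumes n > k_global so enough eigenvectors exist).
        pe = laplacian_low_freq_eigvecs(adj, self.k_global)

        # Local branch: Laplacian spectrum of each node's `hops`-hop induced subgraph,
        # truncated or zero-padded to k_local values.
        reach = torch.linalg.matrix_power(adj + torch.eye(n), hops) > 0
        se = torch.zeros(n, self.k_local)
        for v in range(n):
            idx = reach[v].nonzero(as_tuple=True)[0]
            sub = adj[idx][:, idx]
            sub_lap = torch.diag(sub.sum(dim=1)) - sub
            eigvals = torch.linalg.eigvalsh(sub_lap)      # local graph frequencies
            m = min(self.k_local, eigvals.numel())
            se[v, :m] = eigvals[:m]

        # Weighted combination of the two embeddings yields one encoding per node.
        return self.alpha * self.global_proj(pe) + (1 - self.alpha) * self.local_proj(se)


# Example on a 4-cycle graph: the output holds one d_model-dimensional encoding per node.
adj = torch.tensor([[0., 1., 0., 1.],
                    [1., 0., 1., 0.],
                    [0., 1., 0., 1.],
                    [1., 0., 1., 0.]])
encoder = CombinedNodeEncoding(k_global=2, k_local=3, d_model=16)
print(encoder(adj).shape)  # torch.Size([4, 16])
```

In this sketch the scalar alpha realizes the weighted combination that balances local and global structural information; a practical implementation would extract subgraphs with sparse, batched operations rather than the dense per-node loop shown here.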
