Abstract
The Graph Transformer architecture has gained rapid traction in molecular property prediction due to its ability to model complex interactions among all nodes. However, the self-attention mechanism in the Transformer encoder treats the graph as fully connected during representation learning, discarding the graph's original structural information. In this work, a Local Transformer is proposed to preserve the original graph structure and aggregate local node information. Within the model, a simple graph convolution is designed to replace the self-attention module, and it achieves state-of-the-art performance on the ZINC dataset. Further, to address the problem that graph neural networks (GNNs) cannot capture long-range interactions between atoms, a novel end-to-end framework combining a GNN with Local and Global Transformers is proposed, which achieves strong results on the QM9 dataset.
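To make the core idea concrete, the sketch below shows one encoder block in which dense self-attention is replaced by a simple graph convolution, so messages flow only along the graph's original edges. The abstract does not specify the exact convolution, so the normalized-adjacency aggregation, the layer name `LocalTransformerLayer`, and all dimensions here are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class LocalTransformerLayer(nn.Module):
    """Hypothetical encoder block: self-attention swapped for a simple
    graph convolution, so aggregation respects the original edges."""

    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.conv_lin = nn.Linear(dim, dim)  # assumed neighbour transform
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim)
        )

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (num_nodes, dim) node features
        # adj: (num_nodes, num_nodes) row-normalised adjacency matrix
        # Local aggregation over existing edges only, unlike dense
        # self-attention, which attends over a fully connected graph.
        h = self.norm1(x + adj @ self.conv_lin(x))
        return self.norm2(h + self.ffn(h))

# Usage on a toy 5-node graph (self-loops only, for illustration):
x = torch.randn(5, 16)
adj = torch.eye(5)
layer = LocalTransformerLayer(dim=16, hidden=32)
out = layer(x, adj)  # (5, 16) locally aggregated node embeddings
```

Keeping the residual connections, layer norms, and feed-forward sub-layer preserves the Transformer block structure while restricting information flow to the graph's own topology.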