Abstract
In the Transformer architecture, positional encoding is a vital component because it supplies the model with information about the structure and position of the input. Graph Transformers have likewise introduced a variety of positional encodings to inject additional structural information. Building on this line of work, we propose the Structural and Positional Ensembled Graph Transformer (SPEGT), which integrates positional and structural information. We designed SPEGT by observing that structural and positional encodings of graphs have complementary properties yet similar computational processes. SPEGT combines three encodings in a single unified component: (i) Random Walk Positional Encoding, (ii) the Shortest Path Distance between each pair of nodes, and (iii) Hierarchical Cluster Encoding. We identify a weakness in a well-known positional encoding and experimentally verify that combining it with the other encodings resolves that weakness. In addition, SPEGT outperforms previous models on a variety of graph datasets, and an error case analysis shows that its unified positional encoding performs well even on structurally indistinguishable graph data.
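For intuition on the first of these components, Random Walk Positional Encoding assigns each node the probabilities that a uniform random walk of length 1, ..., k returns to its starting node, i.e. the diagonals of powers of the transition matrix RW = AD⁻¹. Below is a minimal NumPy sketch of that computation under our assumptions; the function name and the k-step interface are illustrative, not the paper's implementation:

```python
import numpy as np

def random_walk_pe(adj: np.ndarray, k: int) -> np.ndarray:
    """Return an (n, k) matrix whose i-th row is
    [RW_ii, (RW^2)_ii, ..., (RW^k)_ii] with RW = A D^{-1}:
    the probability that a length-t random walk returns to node i."""
    deg = adj.sum(axis=0)
    rw = adj / np.clip(deg, 1.0, None)   # RW = A D^{-1}; clip guards isolated nodes
    pe = np.empty((adj.shape[0], k))
    power = np.eye(adj.shape[0])
    for t in range(k):
        power = power @ rw               # accumulate RW^{t+1}
        pe[:, t] = np.diag(power)        # return probabilities at step t+1
    return pe

# Hypothetical example: a 4-cycle. All nodes are in symmetric positions,
# so every row of the encoding is identical.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
print(random_walk_pe(A, k=3))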