Abstract
As the scope of the receptive field and the depth of Graph Neural Networks (GNNs) are two largely orthogonal aspects of graph learning, existing GNNs often have shallow layers with a truncated receptive field and fall short of satisfactory performance. In this article, we follow the idea of decoupling graph convolution into propagation and transformation processes, which generates representations over a sequence of increasingly larger neighborhoods. Although this manner enlarges the receptive field, it leaves two critical problems unsolved: how to find a suitable receptive field that avoids both under-smoothing and over-smoothing, and how to balance different diffusion operators to better capture local and global dependencies. We tackle these challenges and propose a Scalable, Adaptive Graph Convolutional Network (SAGCN) with a Transformer architecture. Concretely, we propose a novel non-heuristic metric that quickly finds a suitable number of diffusion iterations and produces smoothed local embeddings, making the truncated receptive field scalable and independent of prior experience. Furthermore, we devise smooth2seq and diffusion-based position schemes, introduced into the Transformer architecture, to better capture local and global information among the embeddings. Experimental results on various open benchmarks show that SAGCN enjoys high accuracy, scalability, and efficiency, and is competitive with other state-of-the-art methods.
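To make the decoupled propagate-then-transform idea concrete, the sketch below shows one plausible reading of it in PyTorch: parameter-free diffusion steps produce embeddings over increasingly larger neighborhoods, the per-hop views are stacked into a token sequence per node (a smooth2seq-style input), and a Transformer encoder with per-hop position embeddings attends across them. All class and function names here are hypothetical illustrations, not the authors' released code; the fixed hop count, dense adjacency, and mean pooling are simplifying assumptions (the paper's metric would instead choose the number of diffusion iterations adaptively).

```python
# Minimal sketch of decoupled diffusion + Transformer over hop views.
# Hypothetical names; assumes a small dense graph for clarity.
import torch
import torch.nn as nn

def normalized_adjacency(A: torch.Tensor) -> torch.Tensor:
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2."""
    A_hat = A + torch.eye(A.size(0))
    d_inv_sqrt = A_hat.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * A_hat * d_inv_sqrt.unsqueeze(0)

class DecoupledDiffusionTransformer(nn.Module):
    def __init__(self, in_dim: int, hidden: int, num_hops: int, num_classes: int):
        super().__init__()
        self.num_hops = num_hops
        self.proj = nn.Linear(in_dim, hidden)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Position embedding indexed by diffusion step: a simplified stand-in
        # for the paper's diffusion-based position scheme.
        self.pos = nn.Embedding(num_hops + 1, hidden)
        self.out = nn.Linear(hidden, num_classes)

    def forward(self, X: torch.Tensor, A_norm: torch.Tensor) -> torch.Tensor:
        views = [X]
        for _ in range(self.num_hops):           # propagation only, no weights
            views.append(A_norm @ views[-1])     # k-th view = k-hop smoothing
        seq = torch.stack(views, dim=1)          # (N, K+1, in_dim) per-node sequence
        seq = self.proj(seq) + self.pos.weight   # add per-hop position encoding
        h = self.encoder(seq)                    # attend across local/global views
        return self.out(h.mean(dim=1))           # pool hop views per node

# Toy usage: 5 nodes, 8 input features, 3 diffusion hops.
A = (torch.rand(5, 5) > 0.5).float()
A = ((A + A.t()) > 0).float()                    # symmetrize
model = DecoupledDiffusionTransformer(in_dim=8, hidden=16, num_hops=3, num_classes=2)
logits = model(torch.randn(5, 8), normalized_adjacency(A))
print(logits.shape)  # torch.Size([5, 2])
```

Because the transformation is applied only after all propagation steps, the diffusion views can be precomputed once per graph, which is what makes this family of methods scale to large graphs.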