Abstract
Recently, graph neural networks (GNNs) have been shown to be effective in learning representative graph features. However, existing pooling-based strategies for graph classification use graph representation information inefficiently: every node and every layer contributes equally to the graph-level output. In this paper, we develop a novel architecture for extracting an effective graph representation by introducing structured multi-head self-attention, in which the attention mechanism takes three forms: node-focused, layer-focused, and graph-focused. To make full use of the information in graphs, the node-focused self-attention first aggregates neighbor node features in a scaled dot-product manner; the layer-focused and graph-focused self-attention then serve as a readout module that measures the importance of different nodes and layers to the model's output. Moreover, combining these two readout self-attention mechanisms with base node-level GNNs improves performance on graph classification tasks. The proposed Structured Self-attention Architecture is evaluated on two kinds of graph benchmarks: bioinformatics datasets and social network datasets. Extensive experiments demonstrate superior predictive accuracy over existing methods.
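To make the three attention forms concrete, the following is a minimal PyTorch sketch, not the authors' implementation: the class names, the binary adjacency mask, and the single-head formulation are illustrative assumptions. It shows scaled dot-product attention restricted to graph neighbors (node-focused) and a learned importance-weighted pooling that can play the graph-focused or layer-focused readout role.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class NodeSelfAttention(nn.Module):
    """Node-focused attention: aggregate neighbor features via scaled dot-product."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.scale = math.sqrt(dim)

    def forward(self, x, adj):
        # x: (num_nodes, dim); adj: (num_nodes, num_nodes) binary adjacency,
        # assumed to include self-loops so every row has at least one neighbor.
        scores = self.q(x) @ self.k(x).t() / self.scale
        # Mask out non-neighbors so each node attends only to its neighborhood.
        scores = scores.masked_fill(adj == 0, float("-inf"))
        return F.softmax(scores, dim=-1) @ self.v(x)

class AttentionReadout(nn.Module):
    """Graph-/layer-focused readout: pool items by learned importance weights."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, h):
        # h: (num_items, dim); items are nodes within one layer (graph-focused)
        # or per-layer graph summaries (layer-focused).
        w = F.softmax(self.score(h), dim=0)  # importance of each item
        return (w * h).sum(dim=0)            # weighted graph-level embedding
```

In a full model along these lines, the node-focused layer would be stacked several times; a graph-focused readout pools the nodes of each layer into a per-layer summary, and a layer-focused readout then pools those summaries into the final graph embedding, so that neither nodes nor layers are forced to contribute equally.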