Abstract

Graph neural networks (GNNs) are ubiquitous in graph node classification tasks. Most GNN methods update each node's embedding iteratively by aggregating information from its neighbors. However, they often suffer from negative disturbance caused by edges that connect nodes with different labels. One way to alleviate this disturbance is to use attention to learn the aggregation weights, but current attention-based GNNs consider only feature similarity and lack supervision. In this article, we consider the label dependency of graph nodes and propose a decoupling attention mechanism that learns both hard and soft attention. The hard attention is learned on labels to produce a refined graph structure with fewer interclass edges, so that the negative disturbance during aggregation is reduced. The soft attention learns the aggregation weights from features over the refined graph structure to increase the information gain during message passing. In particular, we formulate our model under the expectation-maximization (EM) framework, where the learned attention guides label propagation in the M-step and feature propagation in the E-step. Extensive experiments on six well-known benchmark graph datasets verify the effectiveness of the proposed method.
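To make the mechanism concrete, the sketch below shows one aggregation step with decoupled hard and soft attention, assuming a label-agreement threshold for the hard gate and scaled dot-product scores for the soft weights. It is not the paper's implementation: the function and parameter names (`decoupled_attention_step`, `tau`, `w_q`, `w_k`) and the exact gating rule are illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation) of one aggregation step
# with decoupled hard/soft attention. The agreement-based hard gate and the
# dot-product soft scores are illustrative assumptions.
import torch
import torch.nn.functional as F


def decoupled_attention_step(x, edge_index, label_probs, w_q, w_k, tau=0.5):
    """One neighborhood-aggregation step guided by decoupled attention.

    x           : (N, d) node features
    edge_index  : (2, E) directed edges (src -> dst)
    label_probs : (N, C) predicted class distributions (e.g., from the M-step)
    w_q, w_k    : (d, d) projection matrices for soft attention
    tau         : hard-attention threshold on label agreement
    """
    src, dst = edge_index

    # Hard attention: keep an edge only if the endpoints' predicted label
    # distributions agree strongly, approximating an intraclass edge.
    agreement = (label_probs[src] * label_probs[dst]).sum(dim=-1)  # (E,)
    keep = agreement >= tau
    src, dst = src[keep], dst[keep]
    if src.numel() == 0:  # all edges pruned: nothing to aggregate
        return torch.zeros_like(x)

    # Soft attention: scaled dot-product scores over the refined edge set.
    scores = ((x @ w_q)[dst] * (x @ w_k)[src]).sum(dim=-1) / x.size(1) ** 0.5

    # Softmax normalization per destination node (segment softmax).
    exp = (scores - scores.max()).exp()  # shift for numerical stability
    denom = torch.zeros(x.size(0)).index_add_(0, dst, exp)
    alpha = exp / denom[dst].clamp(min=1e-12)

    # Weighted aggregation of neighbor features into each destination node.
    out = torch.zeros_like(x).index_add_(0, dst, alpha.unsqueeze(-1) * x[src])
    return out


# Toy usage: 4 nodes, 2 classes, a small directed edge list.
N, d, C = 4, 8, 2
x = torch.randn(N, d)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 0, 3, 2]])
label_probs = F.softmax(torch.randn(N, C), dim=-1)
w_q, w_k = torch.randn(d, d), torch.randn(d, d)
h = decoupled_attention_step(x, edge_index, label_probs, w_q, w_k, tau=0.3)
print(h.shape)  # torch.Size([4, 8])
```

The key design point the sketch illustrates is the decoupling itself: the hard gate prunes likely interclass edges using label information before the soft, feature-based weights are ever computed, so the softmax is normalized only over the refined neighborhood.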
