Subgraph representation learning is a burgeoning field within graph representation learning. However, current methods face several issues: they cannot facilitate information interaction between subgraphs, they destroy the original node feature information, and the structural information of the entire graph interferes with the extracted subgraph structural information. To address these issues, this paper proposes the subgraph autoencoder with bridge nodes (SAWBN). First, SAWBN enhances information interaction by adding a bridge node to each subgraph and connecting it to all nodes within that subgraph. The added bridge nodes create indirect connections between subgraphs, enabling them to interact and exchange information during the message-passing phase. To the best of our knowledge, this is the first paper to propose the concept of bridge nodes and to introduce them into subgraph representation learning. Second, SAWBN uses a new message-passing paradigm called graph convolution with different weights (GDW). During convolution, GDW assigns different weights to nodes inside and outside a subgraph, distinguishing their importance without compromising the original node feature information. Third, SAWBN uses an autoencoder to reconstruct subgraph node features. This reconstruction removes the interference of the entire graph's structural information and regenerates subgraph node features rich in subgraph structural information. Finally, these node features are aggregated into a representation that captures the subgraph's structural information. Extensive experiments on four real-world subgraph datasets demonstrate that SAWBN outperforms state-of-the-art baselines. The source code of SAWBN is publicly available at https://github.com/denggaoqin/SAWBN.
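The first two steps of the pipeline, attaching a bridge node to every subgraph and running a convolution that weights in-subgraph and out-of-subgraph neighbors differently, can be illustrated with a minimal NumPy sketch. The function names, the mean-based bridge-node initialization, and the weights `w_in`/`w_out` below are assumptions made for illustration only, not the authors' implementation (see the linked repository for that).

```python
import numpy as np

def add_bridge_nodes(A, X, subgraphs):
    """Append one bridge node per subgraph, connected to all of that subgraph's nodes."""
    n, d = X.shape
    k = len(subgraphs)
    A_new = np.zeros((n + k, n + k))
    A_new[:n, :n] = A
    for i, nodes in enumerate(subgraphs):
        A_new[n + i, nodes] = 1.0   # bridge node -> subgraph nodes
        A_new[nodes, n + i] = 1.0   # subgraph nodes -> bridge node
    # Assumption: bridge-node features start as the mean of their subgraph's features.
    bridge_feats = np.stack([X[nodes].mean(axis=0) for nodes in subgraphs])
    return A_new, np.vstack([X, bridge_feats])

def gdw_layer(A, X, W, in_subgraph, w_in=1.0, w_out=0.5):
    """One weighted-convolution step: messages from nodes inside the target
    subgraph (in_subgraph=True) are scaled by w_in, the rest by w_out."""
    weights = np.where(in_subgraph, w_in, w_out)   # per-source-node weight
    A_w = A * weights[np.newaxis, :]               # rescale incoming messages
    deg = A_w.sum(axis=1, keepdims=True) + 1e-8    # normalize by weighted degree
    return np.maximum((A_w / deg) @ X @ W, 0.0)    # mean aggregation + ReLU

# Toy usage: a 4-node path graph with two subgraphs.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 8))
A2, X2 = add_bridge_nodes(A, X, subgraphs=[[0, 1], [2, 3]])
in_sub = np.array([True, True, False, False, True, False])  # subgraph 0 and its bridge node
H = gdw_layer(A2, X2, rng.normal(size=(8, 16)), in_sub)
```

The remaining steps described in the abstract, reconstructing node features with an autoencoder and aggregating them into a subgraph embedding, would operate on the output of such a layer and are omitted here for brevity.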