Abstract

Graph neural networks (GNNs) are widely used to process graph-structured data, which arises throughout daily life. Owing to their strong performance in extracting features from structured data, GNNs have attracted growing attention from both academia and industry. Most GNN models learn node representations by fully or randomly aggregating the features of neighboring nodes. However, such coarsely designed aggregation schemes often lack interpretability, which limits the adoption of GNN models. This study constructs a transparent and explainable GNN model by distilling knowledge from pretrained "black-box" models. Specifically, a shallow graph neural network with explicit "contribution" weights between pairs of nodes is trained by simultaneously preserving fidelity to the original model's behavior and minimizing the prediction loss. A neighbor selection strategy is then built on these explicit weights to maintain both high performance and interpretability. To evaluate the proposed framework, the method is incorporated into four state-of-the-art models: GCN, GAT, GraphSAGE, and AM-GCN. Experimental results on three real-world datasets demonstrate the effectiveness of the proposed framework.
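The abstract does not give the model's exact formulation, but the two core ideas it describes, a shallow GNN with explicit per-pair "contribution" weights and a training objective that combines fidelity to the black-box teacher with the prediction loss, can be sketched as follows. This is a minimal illustrative sketch in PyTorch under stated assumptions: the class `ExplicitWeightGNN`, the function `distillation_loss`, and the weighting `alpha`/temperature `T` parameters are hypothetical names, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExplicitWeightGNN(nn.Module):
    """Shallow one-layer GNN whose node-to-node 'contribution' weights are
    explicit learnable parameters (dense, so only suitable for small graphs)."""
    def __init__(self, num_nodes, in_dim, out_dim):
        super().__init__()
        self.contrib = nn.Parameter(torch.zeros(num_nodes, num_nodes))
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # Restrict contributions to actual edges, then normalize per node,
        # so each row gives an inspectable distribution over neighbors.
        w = self.contrib.masked_fill(adj == 0, float("-inf"))
        w = torch.softmax(w, dim=1)          # rows sum to 1 over neighbors
        return self.lin(w @ x)               # weighted neighbor aggregation

def distillation_loss(student_logits, teacher_logits, labels, alpha=0.5, T=2.0):
    # Prediction loss on the ground-truth labels ...
    ce = F.cross_entropy(student_logits, labels)
    # ... plus fidelity to the teacher's softened output distribution
    # (standard knowledge-distillation term; the paper may weight these differently).
    kl = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * (T * T)
    return alpha * ce + (1.0 - alpha) * kl

# Tiny 4-node demo graph with self-loops so every node has a neighbor.
adj = torch.tensor([[1, 1, 0, 0],
                    [1, 1, 1, 0],
                    [0, 1, 1, 1],
                    [0, 0, 1, 1]], dtype=torch.float)
x = torch.randn(4, 5)
model = ExplicitWeightGNN(num_nodes=4, in_dim=5, out_dim=3)
teacher_logits = torch.randn(4, 3)   # stand-in for a pretrained model's outputs
labels = torch.tensor([0, 1, 2, 0])
loss = distillation_loss(model(x, adj), teacher_logits, labels)
```

Because the normalized contribution matrix is an explicit parameter rather than a by-product of attention or sampling, a neighbor selection strategy of the kind the abstract describes could simply keep, for each node, the neighbors with the largest normalized weights.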
