Abstract

Graph Neural Networks (GNNs) have recently achieved remarkable success in learning from graph-structured data and have been applied to a variety of practical tasks such as medical diagnosis, drug discovery, chemical compound synthesis, and traffic forecasting. However, like other deep learning methods, GNNs are black-box models whose predictions are difficult to interpret, which restricts their applicability in important domains such as biochemistry and medicine. This paper focuses on explaining graph learning methods, a field that remains underexplored and lacks a strong theoretical grounding. Based on the Shapley value from game theory, we propose a novel graph explainer, named Shapley and Embedding-based Graph Explainer (SEGE), which builds a linear approximator in node embedding space to interpret a GNN's prediction on an input graph. Experiments on synthetic and real-world datasets demonstrate that SEGE achieves state-of-the-art performance and significantly outperforms baseline models both qualitatively and quantitatively. Moreover, SEGE preserves automorphic equivalence, a property not observed in existing explainers.
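To make the core idea concrete, below is a minimal sketch of a Shapley-value-based linear surrogate for a graph-level prediction, in the spirit of KernelSHAP: sample node-presence masks, query the black-box GNN on the masked graphs, and fit a weighted linear model whose coefficients approximate per-node Shapley values. All names and parameters here are hypothetical illustrations, not the paper's actual SEGE implementation, which operates in node embedding space rather than over raw node masks.

```python
# Hypothetical sketch: a Shapley-kernel-weighted linear surrogate
# (KernelSHAP-style). Nodes are treated as players in a cooperative
# game, and a linear model is fit over sampled node-presence masks.
from math import comb

import numpy as np

def shapley_kernel_weight(n, s):
    """Shapley kernel weight for a coalition of size s out of n nodes."""
    if s == 0 or s == n:
        return 1e6  # large finite weight stands in for the exact constraints
    return (n - 1) / (comb(n, s) * s * (n - s))

def explain_graph(predict_fn, num_nodes, num_samples=2048, seed=0):
    """Approximate per-node Shapley values for a black-box prediction.

    predict_fn: maps a binary node mask of shape [num_nodes] to a scalar,
    e.g. the GNN's class probability on the correspondingly masked graph.
    """
    rng = np.random.default_rng(seed)
    masks = rng.integers(0, 2, size=(num_samples, num_nodes))
    preds = np.array([predict_fn(m) for m in masks])
    weights = np.array(
        [shapley_kernel_weight(num_nodes, int(m.sum())) for m in masks]
    )
    # Weighted least squares: scale each row by sqrt(weight), add an intercept.
    X = np.hstack([masks, np.ones((num_samples, 1))])
    sw = np.sqrt(weights)[:, None]
    coef, *_ = np.linalg.lstsq(X * sw, preds * sw.ravel(), rcond=None)
    return coef[:-1]  # coefficients approximate node importance scores
```

For instance, one could pass a `predict_fn` that zeroes out the features of masked-out nodes before calling the trained GNN; the returned vector then ranks nodes by their estimated contribution to the prediction.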
