Graph neural networks (GNNs) find applications in various domains such as computational biology, natural language processing, and computer security. Owing to their popularity, there is an increasing need to explain GNN predictions since GNNs are black-box machine learning models. One way to address this issue involves counterfactual reasoning, where the objective is to alter the GNN prediction by minimal changes in the input graph. Existing methods for counterfactual explanation of GNNs are limited to instance-specific local reasoning. This approach has two major limitations: it cannot offer global recourse policies, and it overloads human cognitive capacity with too much information. In this work, we study the global explainability of GNNs through global counterfactual reasoning. Specifically, we want to find a small set of representative counterfactual graphs that explains all input graphs. Towards this goal, we propose GCFExplainer, a novel algorithm powered by vertex-reinforced random walks on an edit map of graphs with a greedy summary. Extensive experiments on real graph datasets show that the global explanation from GCFExplainer provides important high-level insights into the model behavior and achieves a 46.9% gain in recourse coverage and a 9.5% reduction in recourse cost compared to state-of-the-art local counterfactual explainers. We also demonstrate that GCFExplainer generates explanations that are more consistent with input dataset characteristics and is robust under adversarial attacks. In addition, K-GCFExplainer, which incorporates a graph clustering component into GCFExplainer, is introduced as a more competitive extension for datasets with a clustering structure, leading to superior performance on three of the four datasets in our experiments and better scalability.
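The abstract names vertex-reinforced random walks as the core search mechanism. As a purely illustrative aid, the sketch below shows the generic vertex-reinforcement idea on a toy adjacency map: each step favors neighbors in proportion to how often they have already been visited, so the walk gravitates toward recurrently useful vertices. This is a minimal, assumption-laden illustration of the general technique, not the paper's actual edit-map walk, reinforcement schedule, or greedy summarization; all names here (`vertex_reinforced_random_walk`, the toy graph `adj`) are hypothetical.

```python
import random
from collections import defaultdict

def vertex_reinforced_random_walk(neighbors, start, steps, seed=0):
    """Simulate a basic vertex-reinforced random walk (VRRW).

    At each step, the walker moves to a neighbor with probability
    proportional to 1 + that neighbor's visit count, so frequently
    visited vertices increasingly attract the walk over time.
    `neighbors` maps each vertex to a list of adjacent vertices.
    """
    rng = random.Random(seed)
    visits = defaultdict(int)          # visit counts per vertex
    visits[start] += 1
    current = start
    path = [start]
    for _ in range(steps):
        nbrs = neighbors[current]
        # Reinforcement: weight each candidate by its past visits.
        weights = [1 + visits[v] for v in nbrs]
        current = rng.choices(nbrs, weights=weights, k=1)[0]
        visits[current] += 1
        path.append(current)
    return path, dict(visits)

# Toy usage on a small triangle-plus-tail graph.
adj = {
    "a": ["b", "c"],
    "b": ["a", "c"],
    "c": ["a", "b", "d"],
    "d": ["c"],
}
path, visits = vertex_reinforced_random_walk(adj, "a", steps=50)
print(visits)
```

In GCFExplainer, the vertices of the walked graph are candidate counterfactual graphs connected by single edit operations, and reinforcement is used to steer the walk toward good counterfactual candidates; the sketch above only conveys the reinforcement principle itself.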