Abstract

Graph Neural Networks (GNNs) are a popular approach for predicting graph-structured data. As GNNs tightly entangle the input graph into the neural network structure, common explainable AI approaches are not applicable. To a large extent, GNNs have remained black boxes for the user so far. In this paper, we show that GNNs can in fact be naturally explained using higher-order expansions, i.e., by identifying groups of edges that jointly contribute to the prediction. Practically, we find that such explanations can be extracted using a nested attribution scheme, where existing techniques such as layer-wise relevance propagation (LRP) can be applied at each step. The output is a collection of walks into the input graph that are relevant for the prediction. Our novel explanation method, which we denote by GNN-LRP, is applicable to a broad range of graph neural networks and lets us extract practically relevant insights on sentiment analysis of text data, structure-property relationships in quantum chemistry, and image classification.
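
As a rough illustration of the idea (the notation below is schematic and ours, not necessarily the paper's: f denotes the GNN output and λ_ij the entries of the connectivity matrix): the prediction of an L-layer GNN is locally a polynomial of degree L in the λ_ij, so the order-L term of a Taylor expansion attributes the prediction to walks W = (w_0, ..., w_L) in the graph:

    R_{\mathcal{W}} \;\propto\; \frac{\partial^L f}{\partial\lambda_{w_0 w_1}\cdots\partial\lambda_{w_{L-1} w_L}} \;\prod_{t=0}^{L-1}\lambda_{w_t w_{t+1}}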

Highlights

  • Many interesting structures found in scientific and industrial applications can be expressed as graphs

  • Combined with the automatic differentiation capabilities of neural network software and the availability of predefined layers such as convolution or pooling, this implementation trick makes it possible to implement GNN-LRP (Layer-wise Relevance Propagation for graph neural networks) for complex GNN architectures with little code overhead; see the sketch after this list. The trick is used in a GNN-LRP demo code that we provide at https://git.tu-berlin.de/thomas_schnake/demo_gnn_lrp

  • The graph isomorphism network (GIN) receives as input the connectivity matrix Λ = A/2, where A is the adjacency matrix augmented with self-connections
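
The following is a minimal sketch of that implementation trick for a single GIN-style layer, written in PyTorch. The function name, the use of the LRP-γ rule, and the toy graph are our assumptions for illustration; the paper's actual demo code lives at the URL above. The sketch also builds the connectivity matrix Λ = A/2 from the last highlight.

    import torch

    def lrp_gamma_step(H, W, Lam, R_out, gamma=0.25, eps=1e-9):
        # Propagate relevance R_out through one layer H' = relu(Lam @ H @ W)
        # via the "gradient x input" formulation of LRP-gamma: forward pass
        # with positively boosted weights, detached ratio, then autodiff.
        H = H.clone().detach().requires_grad_(True)
        Z = torch.relu(Lam @ H @ (W + gamma * W.clamp(min=0)))
        S = (R_out / (Z + eps)).detach()   # ratio is treated as a constant
        (Z * S).sum().backward()           # autodiff redistributes the relevance
        return H * H.grad                  # relevance per node and feature

    # Connectivity matrix from the highlight: Lam = A/2, with A the adjacency
    # matrix augmented with self-connections (here a toy 3-node chain graph).
    A = torch.tensor([[0., 1., 0.],
                      [1., 0., 1.],
                      [0., 1., 0.]]) + torch.eye(3)
    Lam = A / 2

    H = torch.rand(3, 4)                   # toy node features
    W = torch.randn(4, 4)                  # toy layer weights
    R_out = torch.relu(Lam @ H @ W)        # start relevance at the layer output
    print(lrp_gamma_step(H, W, Lam, R_out).sum(dim=1))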


Summary

INTRODUCTION

Many interesting structures found in scientific and industrial applications can be expressed as graphs. The conceptual starting point of our method is the observation that the function implemented by the GNN is locally polynomial in the input graph. This function can be analyzed using a higher-order Taylor expansion to arrive at an attribution of the GNN prediction onto collections of edges, e.g., walks into the input graph. We find that the higher-order expansion can be expressed as a nesting of multiple first-order expansions, starting at the top layer of the GNN and moving towards the input layer. This theoretical insight enables a principled adaptation of the Layer-wise Relevance Propagation (LRP) [16] explanation technique to GNN models, where the propagation procedure is guided along individual walks in the input graph. The code for this paper can be found at https://git.tu-berlin.de/thomas_schnake/paper_gnn_lrp
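
To make the nesting concrete, here is a minimal, self-contained sketch for a toy two-layer GNN, again in PyTorch. All names, the LRP-γ rule, and the toy dimensions are our assumptions, not the paper's reference implementation. Each step applies one first-order (LRP) pass through a layer and then restricts the resulting relevance to a single node, so that chaining the steps scores individual walks (w0, w1, w2):

    import itertools
    import torch

    def lrp_step(H_in, W, Lam, R_out, node, gamma=0.25, eps=1e-9):
        # One LRP-gamma pass through H_out = relu(Lam @ H_in @ W), followed
        # by masking: only the relevance arriving at `node` is kept, which
        # is what guides the propagation along a single walk.
        H_in = H_in.clone().detach().requires_grad_(True)
        Z = torch.relu(Lam @ H_in @ (W + gamma * W.clamp(min=0)))
        S = (R_out / (Z + eps)).detach()
        (Z * S).sum().backward()
        R_in = H_in * H_in.grad
        mask = torch.zeros_like(R_in)
        mask[node] = 1.0
        return R_in * mask

    n, d = 3, 4
    A = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]]) + torch.eye(n)
    Lam = A / 2                               # connectivity matrix, as above
    H0 = torch.rand(n, d)                     # toy node features
    W1, W2 = torch.randn(d, d), torch.randn(d, d)

    H1 = torch.relu(Lam @ H0 @ W1)            # forward pass, keep hidden states
    H2 = torch.relu(Lam @ H1 @ W2)

    # Nest one first-order step per layer, from the top layer to the input.
    for w0, w1, w2 in itertools.product(range(n), repeat=3):
        R2 = torch.zeros_like(H2)
        R2[w2] = H2[w2]                       # relevance starts at node w2
        R1 = lrp_step(H1, W2, Lam, R2, node=w1)
        R0 = lrp_step(H0, W1, Lam, R1, node=w0)
        print((w0, w1, w2), float(R0.sum()))  # relevance of walk w0 -> w1 -> w2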

Related Work
Higher-Order Explanations
Explaining Graph Neural Networks
Graph Neural Networks
First-Order Explanation
Higher-Order Explanation
Nested Computation and Relevant Walks
THE GNN-LRP METHOD
Deep Taylor Decomposition
Deriving GNN-LRP Propagation Rules
Application of GNN-LRP Beyond the GCN Model
Implementing GNN-LRP
Limitations
EVALUATION OF GNN-LRP
From Attribution to Subgraph Selection
Model Activation Task
BA-2motifs Benchmark
NEW INSIGHTS WITH GNN-LRP
Sanity Checks and Other Evaluations
Sentiment Analysis
Quantum Chemistry
Revisiting Image Classification
CONCLUSION