GOAL: Graph Neural Networks (GNNs) are being increasingly adopted in real-world applications, including drug and material discovery [16, 17, 27], recommendation engines [29], congestion modeling on road networks [5], and weather forecasting [11]. However, like other deep learning models, GNNs are considered black boxes because of their limited capacity to explain their predictions. This lack of interpretability poses a significant barrier to their adoption in critical domains such as healthcare, finance, and law enforcement, where transparency and trustworthiness are essential to decision-making. In these domains, understanding the rationale behind a model's predictions is crucial not only for compliance with interpretability requirements but also for identifying potential vulnerabilities and gaining insights to further refine the model. How do we make GNNs interpretable? This is the central motivating question driving my research.