Abstract

Owing to their conspicuous ability to capture topological characteristics, graph neural networks (GNNs) have been widely used in botnet detection and have proven effective. However, the black-box nature of GNN models makes it difficult for users to trust these classifiers. Beyond high accuracy, stakeholders also expect these models to be consistent with human cognition. To address this problem, we propose a method to evaluate the trustworthiness of GNN-based botnet detection models, called BD-GNNExplainer. Concretely, BD-GNNExplainer extracts the data that contribute the most to the GNN's decision by reducing the loss between the classification results produced when the selected subgraph is the GNN model's input and those produced when the entire graph is the input. We then calculate the relevance between the data the model relies on and the informative data to quantify an interpretability score. For GNN models with different structures, these scores indicate which model is more trustworthy and ultimately serve as an essential basis for model optimization. To the best of our knowledge, our work is the first to discuss the interpretability of botnet detection systems, and it provides a guideline for making botnet detection methodologies more understandable.
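To make the core mechanism concrete, the following is a minimal sketch of the subgraph-extraction idea described above, in the spirit of GNNExplainer-style mask learning: a soft edge mask is optimized so that the masked subgraph reproduces the trained model's full-graph predictions, with a sparsity penalty keeping the explaining subgraph small. It assumes PyTorch; all names here (TinyGCN, explain, edge_mask, the sparsity weight) are hypothetical illustrations, not the authors' actual implementation.

```python
# Hypothetical sketch of GNNExplainer-style subgraph extraction for a
# botnet-detection GNN; not the authors' BD-GNNExplainer code.
import torch
import torch.nn.functional as F

class TinyGCN(torch.nn.Module):
    """A two-layer dense GCN operating on an adjacency matrix."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w1 = torch.nn.Linear(in_dim, hid_dim)
        self.w2 = torch.nn.Linear(hid_dim, n_classes)

    def forward(self, x, adj):
        # Symmetric normalization: D^{-1/2} (A + I) D^{-1/2}
        a = adj + torch.eye(adj.size(0))
        d = a.sum(dim=1).clamp(min=1e-12).pow(-0.5)
        a_norm = d.unsqueeze(1) * a * d.unsqueeze(0)
        h = F.relu(self.w1(a_norm @ x))
        return self.w2(a_norm @ h)  # per-node logits (e.g., bot vs. benign)

def explain(model, x, adj, epochs=200, lr=0.05, sparsity=0.005):
    """Learn a soft edge mask whose masked graph preserves the model's decisions."""
    model.eval()
    with torch.no_grad():
        full_logits = model(x, adj)          # predictions on the entire graph
        target = full_logits.argmax(dim=1)   # classes the model assigned

    mask_logits = torch.nn.Parameter(torch.randn_like(adj))
    opt = torch.optim.Adam([mask_logits], lr=lr)
    for _ in range(epochs):
        # Multiply by adj so only existing edges can be kept, never added.
        edge_mask = torch.sigmoid(mask_logits) * adj
        sub_logits = model(x, edge_mask)
        # Loss: masked-graph predictions should match full-graph predictions,
        # plus an L1 penalty that keeps the explaining subgraph small.
        loss = F.cross_entropy(sub_logits, target) + sparsity * edge_mask.abs().sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (torch.sigmoid(mask_logits) * adj).detach()  # per-edge importance
```

The returned edge-importance mask identifies the subgraph the model relies on; comparing it against independently known informative edges (e.g., labeled command-and-control links) would yield the relevance-based interpretability score the abstract describes.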
