Abstract

Deep neural networks have recently been shown to be vulnerable to adversarial examples, which deceive the attacked models with high confidence. This poses significant security threats and casts doubt on the reliability of deploying deep learning models in security-critical domains, making effective defense against diverse adversarial examples an essential yet challenging requirement. Adversarial example detection can flag adversarial inputs before they affect the model's decision. However, existing detection methods are usually restricted to specific attacks and lack both generalization ability and an interpretable decision basis. In this paper, we observe that adversarial examples share a common characteristic: they alter the semantic information recognized by the model, which both distinguishes them from benign inputs and provides interpretability. Based on this perspective, we propose a semantic graph matching (SeMatch) method for attack-agnostic adversarial example detection. SeMatch detects an adversarial example by comparing its constructed semantic graph with the semantic graph prototype of its predicted class, and further corrects its classification result. Experimental results demonstrate that SeMatch effectively detects and classifies adversarial examples across several attack settings, achieving average detection and classification accuracies of 96.01% and 89.71%, respectively. In unknown-attack scenarios, SeMatch is more effective and interpretable than state-of-the-art methods.
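To make the detect-then-correct idea concrete, the sketch below illustrates the workflow the abstract describes: compare an input's semantic graph against the prototype of its predicted class, flag it as adversarial if the match is poor, and re-assign it to the best-matching class. This is a minimal illustration under assumed representations, not the paper's actual implementation; all names (`graph_match_score`, `detect_and_correct`, `tau`) and the adjacency-matrix graph encoding are hypothetical.

```python
# Minimal sketch of semantic-graph-based detection and correction.
# Assumption: each semantic graph is encoded as a weighted adjacency
# matrix over a fixed set of semantic concepts; the paper's actual
# graph construction and matching objective may differ.
import numpy as np

def graph_match_score(g: np.ndarray, proto: np.ndarray) -> float:
    """Toy similarity between two semantic graphs (higher = more similar).
    A real system would use a proper graph-matching objective."""
    return -float(np.linalg.norm(g - proto))

def detect_and_correct(x_graph: np.ndarray,
                       pred_class: int,
                       prototypes: dict[int, np.ndarray],
                       tau: float):
    """Flag the input as adversarial if its semantic graph deviates from
    the prototype of its predicted class beyond threshold tau, then
    correct the label to the class whose prototype matches best."""
    score = graph_match_score(x_graph, prototypes[pred_class])
    is_adversarial = score < tau
    corrected = pred_class
    if is_adversarial:
        # Correction step: re-assign to the best-matching prototype.
        corrected = max(prototypes,
                        key=lambda c: graph_match_score(x_graph, prototypes[c]))
    return is_adversarial, corrected
```

In such a scheme, the threshold `tau` would typically be calibrated on benign validation data, e.g., so that a chosen fraction of clean inputs is accepted.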
