Abstract

Convolutional neural networks (CNNs) are widely used for image classification, but their vulnerability to adversarial attacks poses challenges to their reliability and security. Current adversarial robustness (AR) measures, however, lack a theoretical foundation, limiting insight into the decision process. To address this issue, we propose a new AR evaluation framework based on Graph of Patterns (GoPs) models and graph distance algorithms. Our approach analyzes AR at a fine granularity from three perspectives, yielding targeted insight into the vulnerabilities of CNNs. Compared with current standards, our approach is theoretically grounded and allows model components to be fine-tuned without repeated trial-and-error validation. Our experimental results demonstrate its effectiveness in uncovering hidden vulnerabilities in CNNs and in providing actionable ways to improve their AR. Our GoPs modeling approach and graph distance algorithms can also be extended to other graph machine learning tasks, such as metric learning on multi-relational graphs. Overall, our framework represents significant progress in AR evaluation, providing a more interpretable, targeted, and efficient way to assess CNN robustness in complex graph-based systems.
