Abstract

In the Internet of Things, smart devices are expected to capture and process data from their environments correctly, even in the presence of perturbations and adversarial attacks. It is therefore important to guarantee the robustness of their intelligent components, e.g., neural networks, to protect the system from environmental perturbations and adversarial attacks. In this paper, we propose a formal verification technique for rigorously proving the robustness of neural networks. Our approach leverages a tight linear approximation technique and constraint substitution, by which we transform the robustness verification problem into an efficiently solvable linear programming problem. Unlike existing approaches, ours can automatically generate adversarial examples when a neural network fails verification. Moreover, it is general and applicable to more complex neural network architectures such as CNNs, LeNet, and ResNet. We implement the approach in a prototype tool called WiNR and evaluate it on extensive benchmarks, including Fashion MNIST, CIFAR-10, and GTSRB. Experimental results show that WiNR can verify a neural network containing over 10,000 neurons on one input image within a minute, with a 6.28% false positive rate on average.
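To make the LP-based recipe concrete, the sketch below shows how local robustness verification of a ReLU network is commonly cast as a linear program: propagate interval bounds, relax each unstable ReLU with the standard triangle relaxation, and check that the worst-case margin between the true class and every rival class stays positive. This is a minimal illustration of the general approach, not WiNR's tighter approximation or constraint substitution; the network weights, input point, and epsilon are made-up toy values.

```python
# Minimal sketch of LP-based local robustness verification (toy values,
# triangle ReLU relaxation); not WiNR's actual tight approximation.
import numpy as np
from scipy.optimize import linprog

# Toy network: y = W2 @ relu(W1 @ x + b1) + b2 (2 inputs, 2 hidden, 2 classes).
W1 = np.array([[1.0, -1.0], [0.5, 1.0]]); b1 = np.array([0.0, -0.2])
W2 = np.array([[1.0, 0.5], [-0.5, 1.0]]); b2 = np.array([0.5, 0.0])

x0 = np.array([0.3, 0.1])   # input point to certify (hypothetical)
eps = 0.1                   # L-infinity perturbation radius
true_cls = 0

# Interval bounds on the pre-activations over the box [x0 - eps, x0 + eps].
pre_c = W1 @ x0 + b1
pre_r = np.abs(W1) @ np.full(2, eps)
pre_lo, pre_hi = pre_c - pre_r, pre_c + pre_r

# LP variables: v = [x_0, x_1, pre_0, pre_1, z_0, z_1].
n = 6
A_ub, b_ub, A_eq, b_eq = [], [], [], []

# Equalities: pre_i = W1[i] @ x + b1[i].
for i in range(2):
    row = np.zeros(n); row[:2] = -W1[i]; row[2 + i] = 1.0
    A_eq.append(row); b_eq.append(b1[i])

# ReLU constraints: exact for stable neurons, triangle relaxation otherwise.
for i in range(2):
    l, u = pre_lo[i], pre_hi[i]
    if u <= 0:                                   # always inactive: z = 0
        row = np.zeros(n); row[4 + i] = 1.0
        A_eq.append(row); b_eq.append(0.0)
    elif l >= 0:                                 # always active: z = pre
        row = np.zeros(n); row[2 + i] = -1.0; row[4 + i] = 1.0
        A_eq.append(row); b_eq.append(0.0)
    else:                                        # unstable: z >= 0, z >= pre,
        r1 = np.zeros(n); r1[4 + i] = -1.0       #   z <= u*(pre - l)/(u - l)
        r2 = np.zeros(n); r2[2 + i] = 1.0; r2[4 + i] = -1.0
        r3 = np.zeros(n); r3[2 + i] = -u / (u - l); r3[4 + i] = 1.0
        A_ub += [r1, r2, r3]; b_ub += [0.0, 0.0, -u * l / (u - l)]

bounds = [(x0[0] - eps, x0[0] + eps), (x0[1] - eps, x0[1] + eps),
          (pre_lo[0], pre_hi[0]), (pre_lo[1], pre_hi[1]),
          (None, None), (None, None)]

# Robust iff the minimized margin (true logit minus rival logit) is positive.
robust = True
for other in range(2):
    if other == true_cls:
        continue
    c = np.zeros(n); c[4:] = W2[true_cls] - W2[other]
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=bounds)
    margin = res.fun + (b2[true_cls] - b2[other])
    print(f"worst-case margin vs class {other}: {margin:.4f}")
    if margin <= 0:
        robust = False  # relaxation admits a possible misclassification

print("verified robust" if robust else "not verified")
```

Because the relaxation over-approximates the network, a positive minimum margin soundly certifies robustness, while a non-positive one may be spurious; this incompleteness is exactly the source of the false positives the abstract quantifies, and tighter linear approximations such as the one the paper proposes shrink that gap.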
