Abstract

Despite impressive capabilities and outstanding performance, deep neural networks (DNNs) have become a growing public concern due to their frequently observed erroneous behaviors. As a result, it is urgent to systematically test DNNs for security issues before they are deployed in the real world. Existing testing approaches provide fine-grained metrics based on neuron coverage, and various methods have been proposed to increase these metrics. However, it has gradually been recognized that higher neuron coverage does not necessarily correspond to a better ability to identify the defects that lead to errors. Moreover, coverage-guided methods cannot uncover errors caused by a faulty training procedure, so the robustness improvements obtained by retraining DNNs on such testing examples are unsatisfactory. To tackle this challenge, we introduce the concept of excitable neurons based on the Shapley value and design a white-box testing framework for DNNs, namely DeepSensor. It is motivated by our observation that neurons bearing larger responsibility for changes in model loss under small perturbations are more likely to be related to incorrect corner cases caused by potential defects. By maximizing the number of excitable neurons corresponding to various incorrect model behaviors, DeepSensor generates testing examples that effectively trigger more errors caused by malicious inputs (both adversarial examples and polluted data) and by incomplete training. Extensive experiments on both image classification and speaker recognition models demonstrate the superiority of DeepSensor. Compared with state-of-the-art testing methods, DeepSensor finds more erroneous model behaviors caused by malicious inputs (∼1.2× for adversarial examples and ∼4.7× for polluted data) and by incompletely trained DNNs. Additionally, retraining on DeepSensor's examples helps DNNs attain a larger l2-norm robustness bound (∼3×) according to CLEVER's certification. Furthermore, we provide interpretable support for the effectiveness of DeepSensor by identifying excitable neurons and visualizing them via t-SNE. The open-source code of DeepSensor can be downloaded at https://github.com/Allen-piexl/DeepSensor/.
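To make the notion of excitable neurons concrete, the sketch below shows one common way to estimate a per-neuron Shapley-style contribution to the loss change under a small perturbation, via Monte Carlo sampling of neuron subsets. This is only an illustration under stated assumptions, not the authors' exact algorithm: `loss_delta` is a hypothetical callable that returns the loss change when only the neurons selected by a mask keep their perturbed activations.

```python
import numpy as np

def shapley_neuron_scores(loss_delta, n_neurons, n_samples=200, seed=0):
    """Monte Carlo estimate of each neuron's marginal contribution
    (Shapley-style) to the change in model loss under a perturbation.

    loss_delta(mask) -> float is assumed to compute
    loss(x + delta) - loss(x) when only neurons with mask[i] == True
    keep their perturbed activations (a hypothetical helper).
    """
    rng = np.random.default_rng(seed)
    scores = np.zeros(n_neurons)
    for _ in range(n_samples):
        perm = rng.permutation(n_neurons)        # random coalition ordering
        mask = np.zeros(n_neurons, dtype=bool)
        prev = loss_delta(mask)
        for i in perm:
            mask[i] = True
            cur = loss_delta(mask)
            scores[i] += cur - prev              # marginal contribution of neuron i
            prev = cur
    return scores / n_samples
```

Neurons whose scores exceed a chosen threshold would then be treated as "excitable" candidates, and test generation would maximize how many of them are activated.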
