Abstract

Effective testing methods have been proposed to verify the reliability and robustness of Deep Neural Networks (DNNs). However, enhancing their adversarial robustness against various attacks and perturbations through testing remains a key issue for their further application. Therefore, we propose DeepRTest, a vulnerability-guided white-box testing framework that effectively tests and improves the adversarial robustness of DNNs. Specifically, a test input generation algorithm based on joint optimization induces misclassification in DNNs. The generated inputs, which achieve high neuron coverage and lie near classification boundaries, expose vulnerabilities so that adversarial robustness can be tested comprehensively. Retraining on the generated inputs then optimizes the classification boundaries and fixes the exposed vulnerabilities, improving adversarial robustness against perturbations. The experimental results indicate that DeepRTest achieved higher neuron coverage and classification accuracy than baseline methods. Moreover, DeepRTest improved adversarial robustness by 39% on average, which was 12.56% higher than other methods.
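The abstract does not spell out the joint objective used for test input generation. The sketch below is only an illustration of the general idea described above, not DeepRTest's actual formulation: it perturbs an input so that the margin between the top-two class logits shrinks (pushing the input toward a classification boundary and thus toward misclassification) while a crude neuron-coverage proxy (mean activation of one hidden layer) rises. The function name, the loss terms, and all hyperparameters are illustrative assumptions.

```python
import torch

def generate_boundary_test_input(model, x, hidden_layer, steps=20, lr=0.01,
                                 lam=0.5, epsilon=0.1):
    """Hypothetical sketch of joint-optimization-based test input generation.

    NOT the DeepRTest algorithm; a minimal illustration assuming a PyTorch
    classifier `model` and one of its hidden modules `hidden_layer`.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    activations = {}

    def hook(_module, _inputs, output):
        # Record the hidden-layer activations of the current forward pass.
        activations["h"] = output

    handle = hidden_layer.register_forward_hook(hook)
    for _ in range(steps):
        logits = model(x_adv)
        top2 = torch.topk(logits, k=2, dim=1).values
        margin = (top2[:, 0] - top2[:, 1]).mean()        # distance to boundary
        coverage = activations["h"].clamp(min=0).mean()  # coverage proxy
        loss = -margin + lam * coverage                  # joint objective
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv += lr * grad.sign()
            # Keep the perturbation within an epsilon-ball around the original input.
            x_adv.copy_(torch.min(torch.max(x_adv, x - epsilon), x + epsilon))
    handle.remove()
    return x_adv.detach()
```

Inputs produced this way would then serve as the retraining set mentioned in the abstract, under the same caveat that the actual DeepRTest objective and retraining procedure are defined in the paper itself.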

