Abstract

Deep neural networks (DNNs) have become increasingly popular. However, a DNN is vulnerable: its performance degrades when it fails to predict given samples correctly. To address this problem, we propose a repair method for DNN-based classifiers that improves accuracy by modifying the parameters of the DNN. First, we transform the DNN repair problem into a linear programming (LP) model by encoding both the constraints and the objective as linear expressions. Second, to reduce the scale of the LP model, we repair the DNN by adjusting only the parameters of its last layer. Third, to improve accuracy on previously misclassified samples without sacrificing accuracy on previously correctly classified samples, we include both types of samples in the optimization. Evaluation on two popular datasets shows that our method outperforms the state-of-the-art methods, improving accuracy by $$25.4$$ percentage points in the adversarial attack scenario and $$67.6$$ percentage points in the backdoor attack scenario. Meanwhile, our method avoids a noticeable accuracy drop on the standard test sets, losing at most $$0.5\%$$. These extensive experiments demonstrate that the proposed method is effective and efficient in repairing DNN-based classifiers.
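The abstract does not spell out the LP formulation, but the following is a minimal sketch of how last-layer repair can be phrased as a linear program, assuming the last layer computes logits $$Wf + b$$ over fixed penultimate-layer features $$f$$. The $$L_1$$ objective, the `margin` parameter, and all function and variable names (`repair_last_layer`, `feats`, etc.) are illustrative assumptions, not taken from the paper; the sketch uses `scipy.optimize.linprog`.

```python
# Minimal sketch: last-layer DNN repair as a linear program (illustrative,
# not the paper's exact formulation). The key observation is that with the
# penultimate features f fixed, the classification constraints are linear
# in the last-layer perturbation (dW, db).
import numpy as np
from scipy.optimize import linprog

def repair_last_layer(W, b, feats, labels, margin=0.05):
    """Find a small perturbation (dW, db) of the last layer so that every
    sample in `feats`/`labels` is classified correctly, minimizing the
    L1 norm of the perturbation.

    W: (C, D) weight matrix, b: (C,) bias, feats: (N, D), labels: (N,)
    """
    C, D = W.shape
    n_vars = C * D + C  # flattened dW plus db
    # Auxiliary variables t >= |x| express the L1 objective linearly.
    # Decision vector: [x (n_vars), t (n_vars)]; minimize sum(t).
    c = np.concatenate([np.zeros(n_vars), np.ones(n_vars)])

    A_ub, b_ub = [], []
    # |x_i| <= t_i  <=>  x_i - t_i <= 0  and  -x_i - t_i <= 0
    I = np.eye(n_vars)
    A_ub.append(np.hstack([I, -I]));  b_ub.append(np.zeros(n_vars))
    A_ub.append(np.hstack([-I, -I])); b_ub.append(np.zeros(n_vars))

    # Classification constraints: for each sample with true class y and
    # every other class k, require logit_y - logit_k >= margin. With
    # logits = (W + dW) f + (b + db), this rearranges to a constraint
    # that is linear in (dW, db).
    rows, rhs = [], []
    for f, y in zip(feats, labels):
        base = W @ f + b
        for k in range(C):
            if k == y:
                continue
            row = np.zeros(2 * n_vars)
            row[y * D:(y + 1) * D] = -f   # -(dW[y] @ f)
            row[k * D:(k + 1) * D] = f    # +(dW[k] @ f)
            row[C * D + y] = -1.0         # -db[y]
            row[C * D + k] = 1.0          # +db[k]
            rows.append(row)
            rhs.append(base[y] - base[k] - margin)
    A_ub.append(np.array(rows)); b_ub.append(np.array(rhs))

    res = linprog(c, A_ub=np.vstack(A_ub), b_ub=np.concatenate(b_ub),
                  bounds=[(None, None)] * n_vars + [(0, None)] * n_vars,
                  method="highs")
    if not res.success:
        return None  # the LP may be infeasible for a fixed margin
    x = res.x[:n_vars]
    return W + x[:C * D].reshape(C, D), b + x[C * D:]

if __name__ == "__main__":
    # Toy demo on synthetic features (6 points in 5-D are affinely
    # independent with probability 1, so the LP is feasible).
    rng = np.random.default_rng(0)
    W0, b0 = rng.normal(size=(3, 5)), np.zeros(3)
    feats = rng.normal(size=(6, 5))
    labels = rng.integers(0, 3, size=6)
    repaired = repair_last_layer(W0, b0, feats, labels)
    if repaired is not None:
        Wr, br = repaired
        preds = np.argmax(feats @ Wr.T + br, axis=1)
        print("all repaired samples now correct:", bool(np.all(preds == labels)))
```

In the setting the abstract describes, `feats`/`labels` would contain both the previously misclassified samples to be fixed and a set of correctly classified samples whose predictions must be preserved; passing both sets to the same LP is what plays the role of the "two types of samples" in the optimization.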

