Abstract

In recent years, deep learning has achieved excellent results in many fields, such as computer vision, natural language processing, and speech processing, and a growing number of applications built on it have brought great convenience to people's lives. However, while deep learning performs well, it is also vulnerable: an attacker can cause a deep learning model to make wrong judgments by adding small perturbations to the input samples. This poses serious security issues for deep learning applications. At present, there are already many research results on defending against such attacks. This article first analyzes a variety of classic adversarial attack methods in detail and classifies them according to their attack scope. By abstracting the differences and commonalities across this classification, we find that adversarial perturbations contain sensitive points, and that fluctuations at these sensitive points affect the classification result of the deep learning model. Inspired by this, we propose a defense strategy that filters only the sensitive points in an adversarial sample, avoiding the processing of non-sensitive points, reducing computation, and improving efficiency.
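The abstract names two ideas without implementation details: crafting an adversarial sample via a small perturbation, and defending by filtering only the sensitive points. The following is a minimal PyTorch sketch of both, assuming FGSM as the "classic" attack, gradient magnitude as the sensitivity criterion, and a median filter as the smoothing step; the toy model, epsilon, top_frac, and kernel values are all illustrative placeholders, not choices made by the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy classifier standing in for any deep model (hypothetical architecture).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()

def fgsm_attack(x, y, epsilon=0.05):
    """Fast Gradient Sign Method: one classic small-perturbation attack."""
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()
    # Step in the direction that increases the loss; assumes inputs in [0, 1].
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

def filter_sensitive_points(x_adv, top_frac=0.1, kernel=3):
    """Sketch of the proposed defense: smooth only the most
    gradient-sensitive pixels and leave the rest untouched.
    The top-fraction criterion and the median filter are assumptions;
    the abstract does not fix either choice."""
    x = x_adv.clone().detach().requires_grad_(True)
    logits = model(x)
    # No true label is available at defense time, so use the model's own
    # prediction as a proxy (also an assumption).
    loss_fn(logits, logits.argmax(dim=1)).backward()
    sensitivity = x.grad.abs()
    # Mark the top `top_frac` most sensitive pixels per sample.
    k = max(1, int(top_frac * sensitivity[0].numel()))
    thresh = sensitivity.flatten(1).topk(k, dim=1).values[:, -1]
    mask = sensitivity >= thresh.view(-1, 1, 1, 1)
    # Median-filter the image, then copy the filtered values back
    # only at the sensitive points.
    n, c, h, w = x_adv.shape
    pad = kernel // 2
    patches = F.unfold(x_adv, kernel, padding=pad)          # (n, c*k*k, h*w)
    patches = patches.reshape(n, c, kernel * kernel, h * w)
    filtered = patches.median(dim=2).values.reshape(n, c, h, w)
    return torch.where(mask, filtered, x_adv)

# Illustrative usage on random data standing in for MNIST-like inputs.
x = torch.rand(4, 1, 28, 28)
y = torch.randint(0, 10, (4,))
x_adv = fgsm_attack(x, y)
x_defended = filter_sensitive_points(x_adv)
```

For brevity the sketch median-filters the whole image and merges the result back only at the masked pixels; an implementation aiming for the efficiency gain the abstract describes would restrict the filtering computation itself to the sensitive locations.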
