Abstract

Deep learning models have achieved remarkable success in both academia and industry in recent years, driving progress across many areas. However, research has shown that they are inherently vulnerable to adversarial samples: inputs crafted to mislead them. Studying adversarial attacks in the field of deep learning security not only helps mitigate potential attacks on models, but the properties of these attacks can also be used to further improve deep learning models. This paper reviews existing work on adversarial attack techniques for deep neural networks. First, the definition, classification criteria, and development of adversarial attacks are introduced; then the classical white-box and black-box attack methods at the present stage are compared and analyzed; finally, a summary and outlook are given.
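To make the notion of an adversarial sample concrete, the following is a minimal sketch (not from the paper) of a white-box attack in the style of the Fast Gradient Sign Method: the input is perturbed along the sign of the loss gradient so that a model which classified it correctly is misled. The logistic-regression model, weights, and epsilon value are illustrative assumptions chosen so the effect is visible.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """One FGSM-style step for a logistic-regression model (toy example).

    With binary cross-entropy loss, the gradient of the loss with respect
    to the input x is (sigmoid(w.x + b) - y) * w; the attack adds
    eps * sign(gradient) to maximally increase the loss per unit of
    L-infinity perturbation.
    """
    grad_x = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad_x)

# Toy model and an input it classifies correctly as class 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])   # w @ x + b = 1.5 > 0, so predicted class 1
y = 1.0

x_adv = fgsm_perturb(x, y, w, b, eps=1.0)
print(sigmoid(w @ x + b) > 0.5)      # original input: classified as 1
print(sigmoid(w @ x_adv + b) > 0.5)  # perturbed input: classification flips
```

The same gradient-sign principle underlies many of the white-box attacks surveyed in this paper; black-box attacks must instead estimate or transfer such gradients without direct access to the model.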
