Abstract

With the advancement of deep learning, models are now deployed in many areas such as Natural Language Processing (NLP) and Computer Vision (CV), and their outstanding performance across a wide range of tasks has made them a topic of intense interest. Current research shows, however, that perturbing input samples with adversarial example techniques can cause most mainstream neural network models to produce incorrect predictions. How to remedy the shortcomings of existing neural network techniques in terms of security and robustness has therefore become an important issue. This paper first reviews the development of adversarial attack techniques and then describes their theoretical foundations, algorithms, and applications. It then designs an experiment to verify whether ResNet18 can be fooled by adversarial attacks, and finally discusses the open problems and challenges that deep learning faces.
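As a concrete illustration of the adversarial example technique the abstract refers to, the sketch below mounts a one-step gradient-sign (FGSM-style) attack against torchvision's ResNet18. The choice of attack, the epsilon value, and the stand-in inputs are illustrative assumptions, not the paper's actual experimental setup.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

# Pretrained ResNet18 in evaluation mode (assumed target model).
model = resnet18(weights="IMAGENET1K_V1").eval()

def fgsm_attack(model, x, y, epsilon=0.03):
    """One-step FGSM: shift each pixel by epsilon along the sign of the
    loss gradient, so the loss on the true label y increases."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    # Keep the perturbed image in the valid pixel range.
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage: x and y stand in for a real image batch and labels.
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([0])
x_adv = fgsm_attack(model, x, y)
same = (model(x).argmax(1) == model(x_adv).argmax(1)).item()
print("prediction unchanged after attack:", same)
```

Here epsilon bounds the per-pixel perturbation size: small values keep the adversarial image visually indistinguishable from the original while still moving it across the model's decision boundary.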
