Abstract

Deep neural networks (DNNs) have achieved remarkable success in tasks such as image classification, speech recognition, and natural language processing. However, DNNs have proven vulnerable to adversarial examples: inputs crafted by adding imperceptible perturbations that mislead a model's output decision and pose significant security risks to deployed systems. Most prior research has focused on computer vision, leaving the security of natural language processing models comparatively neglected. Because text data is discrete, existing attack methods from the image domain cannot be applied to text directly. This article surveys research on adversarial attacks and defenses in natural language processing and discusses directions for future work.
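As a minimal illustration of the discreteness issue noted above, the Python sketch below shows one common style of text attack, greedy synonym substitution against a classifier. The toy sentiment scorer, the synonym table, and all names here are hypothetical stand-ins for illustration only and are not drawn from this article or any surveyed method.

# Illustrative sketch only: greedy word-substitution attack against a toy
# sentiment classifier. The classifier, synonym table, and all names are
# hypothetical, not taken from the surveyed literature.

# Hypothetical victim model: returns P(positive) from simple keyword counts.
POSITIVE = {"great", "good", "excellent", "wonderful"}
NEGATIVE = {"bad", "poor", "terrible", "awful"}

def predict_positive_prob(tokens):
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return (pos + 1) / (pos + neg + 2)  # Laplace-smoothed score

# Hypothetical synonym table: the discrete "perturbation" space.
SYNONYMS = {
    "great": ["decent", "fine"],
    "good": ["okay", "acceptable"],
    "excellent": ["adequate", "passable"],
}

def greedy_substitution_attack(tokens, target_prob=0.5):
    """Greedily replace words with the synonym that most lowers the
    positive-class probability, stopping once the prediction would flip."""
    tokens = list(tokens)
    for i, word in enumerate(tokens):
        current = predict_positive_prob(tokens)
        if current <= target_prob:
            break
        for candidate in SYNONYMS.get(word, []):
            trial = tokens[:i] + [candidate] + tokens[i + 1:]
            prob = predict_positive_prob(trial)
            if prob < current:
                current, tokens = prob, trial
    return tokens

if __name__ == "__main__":
    original = "the movie was great and the acting was excellent".split()
    adversarial = greedy_substitution_attack(original)
    print("original   :", " ".join(original), predict_positive_prob(original))
    print("adversarial:", " ".join(adversarial), predict_positive_prob(adversarial))

Unlike a gradient step on image pixels, each perturbation here is a discrete word swap, which is why image-domain attack methods do not transfer to text directly.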
