Abstract

Deep Neural Networks (DNNs) have achieved remarkable results in many Natural Language Processing (NLP) applications. However, recent studies have found that DNNs can be fooled by modified inputs known as adversarial examples. This work examines DNNs for sentiment analysis under adversarial examples. In particular, we aim to examine the impact of modifying words of specific Part-Of-Speech (POS) categories in input sentences. We conduct extensive experiments with different neural network models across several real-world datasets. The results demonstrate that current DNN models for sentiment analysis are brittle when faced with perturbed, noisy words that humans have no trouble understanding. An interesting finding is that adjectives (Adj) and the combination of adjectives and adverbs (Adj-Adv) contribute substantially to fooling sentiment analysis DNN models.
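To make the POS-targeted perturbation concrete, the following is a minimal, hypothetical sketch (not the paper's actual method): it injects character-level noise only into tokens tagged as adjectives, leaving all other words intact. POS tags are hand-supplied here for simplicity; a real pipeline would obtain them from a POS tagger.

```python
import random

def perturb_adjectives(tokens, tags, seed=0):
    """Swap two adjacent inner characters of each adjective token.

    Illustrative only: this is one simple way to create noisy words
    that humans still understand but that may fool a sentiment model.
    """
    rng = random.Random(seed)
    out = []
    for tok, tag in zip(tokens, tags):
        if tag == "ADJ" and len(tok) > 3:
            # Pick an inner position and swap it with its neighbor.
            i = rng.randrange(1, len(tok) - 2)
            tok = tok[:i] + tok[i + 1] + tok[i] + tok[i + 2:]
        out.append(tok)
    return out

tokens = ["the", "movie", "was", "absolutely", "wonderful"]
tags   = ["DET", "NOUN", "VERB", "ADV", "ADJ"]  # hand-labeled tags
print(" ".join(perturb_adjectives(tokens, tags)))
```

Only the adjective "wonderful" is altered; non-adjective tokens pass through unchanged, mirroring the Adj-focused perturbation the abstract highlights.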
