Abstract

Negation is a universal but complicated linguistic phenomenon that has received considerable attention from the NLP community over the last decade, since a negated statement often carries both an explicit negative focus and implicit positive meanings. To understand a negated statement, it is critical to precisely detect the negative focus in context. However, how to capture contextual information for negative focus detection remains an open challenge. To address this, we propose an attention-based neural network to model contextual information. In particular, we introduce a framework consisting of a Bidirectional Long Short-Term Memory (BiLSTM) neural network and a Conditional Random Fields (CRF) layer to effectively encode the order information and the long-range context dependencies in a sentence. Moreover, we design two types of attention mechanisms, word-level contextual attention and topic-level contextual attention, to exploit contextual information across sentences from the word perspective and the topic perspective, respectively. Experimental results on the SEM'12 shared task corpus show that our approach achieves the best performance on negative focus detection, yielding an absolute improvement of 2.11% over the state of the art and demonstrating the effectiveness of the two contextual attention mechanisms.
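The abstract describes a BiLSTM-CRF framework in which the CRF layer scores label sequences over the BiLSTM's per-token emissions; at inference time such a layer is typically decoded with the Viterbi algorithm. Below is a minimal NumPy sketch of that decoding step only. The function name, score matrices, and label set are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def viterbi_decode(emissions, transitions):
    """Decode the highest-scoring label sequence from CRF-style scores.

    emissions:   (T, K) per-token label scores (e.g. from a BiLSTM).
    transitions: (K, K) score of moving from label i to label j.
    Returns a list of T label indices.
    """
    T, K = emissions.shape
    score = emissions[0].copy()          # best score ending in each label at t=0
    back = np.zeros((T, K), dtype=int)   # backpointers for path recovery
    for t in range(1, T):
        # total[i, j]: best score ending at t with label j, coming from label i
        total = score[:, None] + transitions + emissions[t]
        back[t] = total.argmax(axis=0)
        score = total.max(axis=0)
    # Follow backpointers from the best final label.
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy example: emissions alone determine the path when transitions are zero.
em = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
tr = np.zeros((2, 2))
best = viterbi_decode(em, tr)  # → [0, 1, 0]
```

In the paper's setting, the labels would mark whether each token belongs to the negative focus, and `transitions` would be learned jointly with the BiLSTM.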

Highlights

  • As a linguistic phenomenon that reverses the polarity of a statement or one of its properties, negation is ubiquitous in human languages

  • In sentence S1, the negative focus is the prepositional phrase "until the market close", yielding the interpretation that mutual fund trades take effect, but not until the market close

  • Experimentation on the SEM'12 shared task corpus (Morante and Blanco, 2012) shows that our approach achieves the best performance on the negative focus detection task, yielding an absolute improvement of 2.11% over the state of the art


Summary

Introduction

As a linguistic phenomenon that reverses the polarity of a statement or one of its properties, negation is ubiquitous in human languages. In sentence S1, the negative affix -n't negates the statement that mutual fund trades take effect until the market close. Scenario III points out that the reason the trades do not take effect is that the shareholders stayed put today, which corresponds to the negative focus "until the market close". It is worth noting that a negative focus is often not syntactically determined by sentence structure, but pragmatically judged from the author's intentions conveyed in context. This is consistent with the attention mechanism in neural networks, which has proven effective at improving performance on many NLP tasks, such as sentiment analysis (Wang et al., 2016) and relation classification (Zhou et al., 2016).
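The word-level contextual attention mentioned above can be illustrated with a small sketch: score each context word's representation against the current token's hidden state, normalize the scores with a softmax, and form an attention-weighted context summary. The bilinear scoring form and all names below are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def word_level_attention(h_t, context, W):
    """Hypothetical word-level contextual attention sketch.

    h_t:     (d,)   current token's BiLSTM hidden state.
    context: (n, d) hidden states of n context words (possibly
             from neighboring sentences).
    W:       (d, d) bilinear scoring matrix (assumed form).
    Returns the attention weights and the weighted context summary.
    """
    scores = np.array([h_t @ W @ c for c in context])
    alpha = softmax(scores)                      # weights sum to 1
    summary = (alpha[:, None] * context).sum(axis=0)
    return alpha, summary

rng = np.random.default_rng(0)
h_t = rng.normal(size=4)           # current token's hidden state
context = rng.normal(size=(3, 4))  # three context-word states
alpha, ctx = word_level_attention(h_t, context, np.eye(4))
```

The topic-level variant described in the abstract would attend over topic representations instead of word states, but the weighting scheme follows the same pattern.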

Related Work
BiLSTM-CRF Framework
Word-level Contextual Attention
Topic-level Contextual Attention
Experimentation
Our Methods
Dataset and Settings
Results
Analysis and Discussion
Method
Conclusion