Abstract

Textual materials represent a rich source of information for improving the decision-making of people, businesses, and organizations. However, for natural language processing (NLP), it is difficult to correctly infer the meaning of narrative content in the presence of negations. The reason is that negations can be formulated either explicitly (e.g., by negation words such as “not”) or implicitly (e.g., by expressions that invert meanings such as “forbid”) and that their use is, moreover, domain-specific. Hence, NLP requires a dynamic learning framework for detecting negations; to this end, we develop a reinforcement learning framework for this task. Formally, our approach takes document-level labels (e.g., sentiment scores) as input and then learns a negation policy from them. In this sense, our approach replicates human perceptions as provided by the document-level labels and achieves superior prediction performance. Furthermore, it benefits from weak supervision: the need for granular and thus expensive word-level annotations, as in prior literature, is replaced by document-level annotations. In addition, we propose an approach to interpretability: by evaluating the state-action table, we obtain a novel form of statistical inference that allows us to test which linguistic cues act as negations.
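To make the described setup concrete, the following is a minimal sketch (not the authors' implementation) of how a per-word negation policy could be learned from document-level sentiment labels alone. The toy lexicon, reward definition, epsilon-greedy schedule, and bandit-style update are illustrative assumptions.

```python
# Minimal sketch: a tabular state-action table Q[word][action] where the agent
# decides per word whether it acts as a negation cue. The only supervision is
# the document-level sentiment label (weak supervision). All specifics here
# (lexicon, reward, update rule) are assumptions for illustration.
import random
from collections import defaultdict

# Toy prior polarity lexicon (assumed for illustration).
LEXICON = {"good": 1.0, "great": 1.0, "bad": -1.0, "poor": -1.0}

ACTIONS = ("pass", "negate")  # "negate" flips the polarity of subsequent words


def score_document(tokens, policy):
    """Score a document, flipping polarity after words the policy marks as negations."""
    flip, score, actions = 1.0, 0.0, []
    for tok in tokens:
        a = policy(tok)
        actions.append((tok, a))
        if a == "negate":
            flip = -flip
        score += flip * LEXICON.get(tok, 0.0)
    return score, actions


def train(docs, labels, episodes=2000, alpha=0.1, eps=0.2):
    """Learn a state-action table from document-level labels only."""
    Q = defaultdict(lambda: {a: 0.0 for a in ACTIONS})
    for _ in range(episodes):
        i = random.randrange(len(docs))
        tokens, label = docs[i], labels[i]

        def policy(tok):
            # Epsilon-greedy action selection per word.
            if random.random() < eps:
                return random.choice(ACTIONS)
            q = Q[tok]
            return max(q, key=q.get)

        score, actions = score_document(tokens, policy)
        # Reward: +1 if the predicted polarity matches the document label, else -1.
        reward = 1.0 if (score > 0) == (label > 0) else -1.0
        for tok, a in actions:
            Q[tok][a] += alpha * (reward - Q[tok][a])
    return Q


if __name__ == "__main__":
    docs = [
        "the movie was good".split(),
        "the movie was not good".split(),
        "not bad at all".split(),
        "this is bad".split(),
    ]
    labels = [1, -1, 1, -1]  # document-level sentiment labels (weak supervision)
    Q = train(docs, labels)
    # Inspecting the learned state-action table suggests which cues act as negations.
    print({tok: max(q, key=q.get) for tok, q in Q.items()})
```

Inspecting the learned table in this sketch mirrors the interpretability idea from the abstract: words whose preferred action is "negate" are the candidate negation cues.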
