Abstract

There is a lot of overlap between the core problems of information retrieval (IR) and natural language processing (NLP). An IR system gains from understanding a user need and from understanding documents, and hence from being able to determine whether a document contains information that satisfies the user need. Much of NLP is about the same thing: natural language understanding aims to understand the meaning of questions and documents and the relationships between meanings. The exciting recent application of deep learning approaches in NLP has brought new tools for effectively understanding language semantics. In principle, there should be a lot of synergy, though in practice IR's concern with large systems and macro-scale understanding has tended to contrast with NLP's emphasis on language structure and micro-scale understanding. My talk will emphasize two topics: how NLP can contribute to understanding textual relationships, and how deep learning approaches substantially aid in this goal. One basic, and very successful, tool has been the new generation of distributed word representations: neural word embeddings. However, beyond just word meanings, we need to understand how to compose the meanings of larger pieces of text. Two requirements for that are good ways to understand the structure of human language utterances and ways to compose their meanings. Deep learning methods can help with both tasks. Finally, we need to understand relationships between pieces of text, in order to do tasks such as Natural Language Inference (also known as Recognizing Textual Entailment) and Question Answering, and I will look at some of our recent work in these areas, both with and without the help of neural networks.
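
To make the idea of composing word meanings concrete, here is a minimal sketch (not from the talk itself) of the simplest composition baseline: represent a sentence as the average of its word embeddings and compare sentences by cosine similarity. The vocabulary and 4-dimensional vectors below are invented purely for illustration; real embeddings such as word2vec or GloVe are typically 100-300 dimensional and learned from large corpora.

```python
import numpy as np

# Toy embeddings for illustration only; real word vectors would be
# learned from a large corpus rather than written by hand.
embeddings = {
    "tree":  np.array([0.9, 0.1, 0.0, 0.2]),
    "oak":   np.array([0.8, 0.2, 0.1, 0.1]),
    "car":   np.array([0.1, 0.9, 0.3, 0.0]),
    "grows": np.array([0.4, 0.0, 0.8, 0.3]),
}

def sentence_vector(words):
    """Compose a sentence meaning as the average of its word vectors --
    the simplest baseline, which ignores word order and syntax."""
    return np.mean([embeddings[w] for w in words], axis=0)

def cosine(u, v):
    """Cosine similarity: 1.0 for identical directions, near 0.0 for unrelated."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

s1 = sentence_vector(["oak", "grows"])
s2 = sentence_vector(["tree", "grows"])
s3 = sentence_vector(["car"])

print(f"sim('oak grows', 'tree grows') = {cosine(s1, s2):.2f}")  # high
print(f"sim('oak grows', 'car')        = {cosine(s1, s3):.2f}")  # lower
```

Averaging is deliberately structure-blind, which is exactly the limitation the abstract points to: composing the meanings of larger pieces of text well requires understanding the structure of utterances, which is where deep learning models of composition come in.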
