Abstract

A central goal of Natural Language Processing (NLP), and of Artificial Intelligence (AI) in general, is the creation of conversational agents. For a conversational agent to pass the Turing test and be called intelligent, it has to be open domain: it has to be able to converse about nearly anything [2]. It follows that methods that represent word meaning to support conversational agents should also be as general and comprehensive as possible. A good starting point for determining the desirable characteristics of such a method is grounding in dialogue [1]: maintaining the common ground necessary for mutual understanding between participants. Errors in grounding can make conversation difficult to maintain or can even cause it to break down completely. The analysis of these errors points to three beneficial characteristics for the method, two of which are components of comprehensiveness: the method should be robust, so that it can work with words corrupted or made unintelligible by, for example, Automatic Speech Recognition (ASR) errors, and it should be open, so that it can assign meaning to almost any word regardless of the domain. It should also be transparent: the representation it assigns to words should be easy for human users to understand. In the first part of the dissertation, I propose a method designed according to these principles. As open word interpretation is a fundamental, unsolved problem of NLP, I relax this condition: the method is flexible in that the meaning representation it assigns to words is more graded than a single sense, and it can assign meaning to words whose meaning is unknown using related concepts. I call the method Robust and Flexible Word Explanation (RFWE). The method has been incorporated into a web portal to help participants from different scientific disciplines establish common ground and communicate better. I estimate the RFWE interpretation vectors with a fast neural network architecture, which is important for real-world applications. Lastly, I introduce a method based on RFWE for determining unintelligible words from their textual contexts, such as words not understood by the ASR component of a dialogue system. In the second part, I demonstrate the usefulness of word meaning in practice on problems closely connected to dialogue systems: the automatic generation of word puzzles and next utterance classification, where the task is to determine the correct next response in a conversation.

References

[1] Herbert H. Clark and Susan E. Brennan. "Grounding in communication". In: Perspectives on socially shared cognition 13 (1991), pp. 127–149.
[2] Alan M. Turing. "Computing machinery and intelligence". In: Mind 59.236 (1950), pp. 433–460.
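
As a concrete illustration of the graded, transparent word interpretations described in the abstract, the minimal sketch below contrasts a single-sense assignment with a weighted distribution over related concepts, and falls back to the concepts of surrounding context words when the target word itself is unknown (e.g., garbled by ASR). The concept inventory, the weights, and the `interpret` function are hypothetical illustrations only and do not reproduce the actual RFWE model.

```python
# Illustrative sketch only: the toy lexicon, weights, and fallback strategy
# are assumptions for demonstration, not the dissertation's RFWE method.
from collections import Counter

# A graded interpretation: instead of a single sense, a word maps to a
# weighted set of related concepts (weights sum to 1).
GRADED_LEXICON = {
    "bank": {"financial_institution": 0.6, "river_side": 0.3, "storage": 0.1},
    "mouse": {"rodent": 0.5, "computer_device": 0.5},
}


def interpret(word, context_words):
    """Return a graded interpretation for `word`.

    If the word is unknown (e.g., corrupted by ASR), aggregate the
    interpretations of known context words instead, mirroring the idea of
    assigning meaning to unknown words via related concepts.
    """
    if word in GRADED_LEXICON:
        return dict(GRADED_LEXICON[word])
    # Unknown word: pool and renormalize the concepts of its context words.
    pooled = Counter()
    for w in context_words:
        for concept, weight in GRADED_LEXICON.get(w, {}).items():
            pooled[concept] += weight
    total = sum(pooled.values())
    return {c: v / total for c, v in pooled.items()} if total else {}


if __name__ == "__main__":
    # Known word: a graded, human-readable representation.
    print(interpret("bank", []))
    # Unintelligible word (garbled by ASR): meaning estimated from context.
    print(interpret("b@nk", ["mouse", "bank"]))
```

In this toy setting, a known word returns its own graded interpretation, while an unknown word receives a normalized mixture of the interpretations of its context words.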
