Abstract
Both artificial and biological systems are faced with the challenge of noisy and uncertain estimation of the state of the world, in contexts where feedback is often delayed. This challenge also applies to the processes of language production and comprehension, both when they take place in isolation (e.g., in monologue or solo reading) and when they are combined as is the case in dialogue. Crucially, we argue, dialogue brings with it some unique challenges. In this paper, we describe three such challenges within the general framework of control theory, drawing analogies to mechanical and biological systems where possible: (1) the need to distinguish between self- and other-generated utterances; (2) the need to adjust the amount of advance planning (i.e., the degree to which planning precedes articulation) flexibly to achieve timely turn-taking; (3) the need to track changing conversational goals. We show that message-to-sound models of language production (i.e., those that cover the whole process from message generation to articulation) tend to implement fairly simple control architectures. However, we argue that more sophisticated control architectures are necessary to build language production models that can account for both monologue and dialogue.