Abstract

A spatial context is often present in speech-based human-machine interaction, and its role is especially significant in interaction with robotic systems. Studies in the cognitive sciences show that the frames of reference used in language and in non-linguistic cognition are correlated. In general, humans may use multiple frames of reference; however, since the visual sensory modality operates mainly in a relative frame, most users normally and preferentially adopt a relative frame of reference in spatial language. There is therefore a need to enable dialogue systems to process dialogue acts that instantiate user-centered frames of reference. This paper introduces a cognitively inspired computational modeling method that addresses this need and illustrates it for a three-party human-machine interaction scenario. The paper also reports on an implementation of the proposed model within a prototype system, and briefly discusses aspects of the model's generalizability and scalability.

Keywords: Human-machine interaction, spatial perspective, relative frame of reference, focus tree, cognition
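As a minimal illustration of the kind of computation a relative (user-centered) frame of reference entails, the following Python sketch classifies where a target object lies with respect to a landmark from the speaker's viewpoint, given positions expressed in the robot's world frame. This is not the model proposed in the paper: the function name `relative_direction`, the 2-D geometric formulation, and the four-way left/right/front/behind classification are assumptions made purely for illustration.

```python
import numpy as np

def relative_direction(speaker_pos, landmark_pos, target_pos):
    """Classify the target's position relative to the landmark from the
    speaker's viewpoint (an illustrative relative frame of reference):
    the viewing axis runs from the speaker through the landmark, and
    left/right/front/behind are judged against that axis. All positions
    are 2-D points in the robot's world frame.
    """
    speaker = np.asarray(speaker_pos, dtype=float)
    landmark = np.asarray(landmark_pos, dtype=float)
    target = np.asarray(target_pos, dtype=float)

    # Viewing axis: from the speaker toward the landmark, normalized.
    forward = landmark - speaker
    forward /= np.linalg.norm(forward)

    # Offset of the target from the landmark.
    offset = target - landmark

    # Component of the offset along the viewing axis (positive means
    # farther from the speaker, i.e. "behind" the landmark).
    along = float(np.dot(offset, forward))

    # 2-D cross product: positive means the target lies to the
    # speaker's left of the viewing axis.
    across = float(forward[0] * offset[1] - forward[1] * offset[0])

    if abs(across) >= abs(along):
        return "left of" if across > 0 else "right of"
    return "behind" if along > 0 else "in front of"

# The speaker at the origin looks at a landmark at (2, 0); a target at
# (2, 1) is "left of" the landmark from the speaker's viewpoint, even
# if it appears on the right in the robot's own camera frame.
print(relative_direction((0, 0), (2, 0), (2, 1)))  # -> left of
```

A dialogue system could run such a test over candidate referents to resolve a user-centered expression like "the cup to the left of the box" against the robot's allocentric scene representation.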
