Abstract

A spatial context is often present in speech-based human-machine interaction, and its role is especially significant in interaction with robotic systems. Studies in the cognitive sciences show that the frames of reference used in language and in non-linguistic cognition are correlated. In general, humans may use multiple frames of reference, but since the visual sensory modality operates mainly in a relative frame, most users normally prefer a relative frame of reference in spatial language. There is therefore a need to enable dialogue systems to process dialogue acts that instantiate user-centered frames of reference. This paper introduces a cognitively inspired computational modeling method that addresses this need, and illustrates it for a three-party human-machine interaction scenario. The paper also reports on an implementation of the proposed model within a prototype system, and briefly discusses some aspects of the model's generalizability and scalability.

Keywords: Human-machine interaction, spatial perspective, relative frame of reference, focus tree, cognition
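As a minimal, hypothetical illustration of the kind of user-centered frame resolution the abstract motivates (this is a sketch under assumed conventions, not the paper's model), the Python snippet below maps a speaker-relative direction such as "to my left" onto world-frame coordinates, given the speaker's position and heading. The function name, relation labels, and parameters are illustrative assumptions.

    # Hypothetical sketch: resolving a speaker-relative spatial relation
    # into the robot's world frame. Assumes a 2D world frame with
    # counterclockwise angles and the speaker's heading given in radians.
    import math

    def resolve_relative(speaker_xy, speaker_heading_rad, relation, distance=1.0):
        """Map a relative relation ('front', 'left', 'behind', 'right')
        uttered by the speaker onto world-frame (x, y) coordinates."""
        # Angular offset of each relation with respect to the speaker's heading.
        offsets = {
            "front": 0.0,
            "left": math.pi / 2,
            "behind": math.pi,
            "right": -math.pi / 2,
        }
        theta = speaker_heading_rad + offsets[relation]
        x, y = speaker_xy
        return (x + distance * math.cos(theta), y + distance * math.sin(theta))

    # Example: a speaker at (2, 0) facing along +y says "to my left",
    # which resolves to a point along -x from the speaker: (1.0, 0.0).
    print(resolve_relative((2.0, 0.0), math.pi / 2, "left"))

A dialogue system processing relative frames of reference would perform a transformation of this kind whenever it grounds a user-centered expression, using the perceived pose of whichever participant anchors the reference.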
