Abstract

A primary goal of the field of Human-Robot Interaction is to enable natural human-robot interactions, and thus robot architectures must eventually be able to understand truly natural human speech. Yet despite the abundance of research devoted to language understanding, most robots capable of participating in linguistic interactions can only understand relatively simple utterances (e.g., commands), and do not consider those utterances’ deeper implications. We believe this points to a shortcoming of the current state of the art: humans do not typically restrict themselves to commands, and their intentions are often not derivable from the semantic content of the utterances they employ. Indeed, much human language is intentionally indirect and ambiguous so as to conform to social conventions (e.g., politeness). If we desire truly natural human-robot interactions, we must therefore go beyond the command-based paradigm that characterizes most current robot architectures. While a few architectures have taken first steps toward a deeper understanding of human utterances, these have not attempted to represent a robot’s certainty in its beliefs or perceptions. Because human utterances are rife with both intentional and incidental ambiguity, we believe such systems are ill-equipped for use in the real world. Our research seeks to address the shortcomings of current architectures by developing mechanisms for natural language understanding and generation. These mechanisms use the robot’s goal-based, social, and environmental knowledge.
