Abstract
Imagine mobile robots of the future working side by side with humans, collaborating in a shared workspace. For this to become a reality, robots must be able to do something that humans do constantly: understand how others perceive space and the relative positions of objects around them. In other words, they need the ability to see things from another person's point of view. The authors' research group and others are building computational, cognitive, and linguistic models that can deal with frames of reference. Issues include dealing with constantly changing frames of reference, changes in spatial perspective, understanding what actions to take, the use of new words, and common ground. Their approach is an implementation informed by cognitive and computational theories. It is based on developing computational cognitive models (CCMs) of certain high-level cognitive skills humans possess that are relevant for collaborative tasks. They then use these models as reasoning mechanisms for their robots. Why do they propose using CCMs as opposed to more traditional programming paradigms for robots? They believe that by giving the robots representations and reasoning mechanisms similar to those used by humans, they will build robots that act in a way that is more compatible with humans.
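The abstract's central notion of adopting another agent's frame of reference can be illustrated with a small geometric sketch. This is not the authors' implementation (the paper describes cognitive models, not this code); it is a minimal, hypothetical example of how a robot might re-express a world-frame object position in a human partner's egocentric frame, which is the kind of computation perspective-taking ultimately requires. The function name and coordinate conventions are assumptions for illustration.

```python
import math

def to_other_frame(point, observer_pos, observer_heading):
    """Express a world-frame 2-D point in an observer's egocentric frame.

    point, observer_pos: (x, y) in world coordinates.
    observer_heading: observer's orientation in radians (world frame).
    Returns (forward, left): +forward is along the observer's gaze,
    +left is 90 degrees counter-clockwise from it.
    """
    dx = point[0] - observer_pos[0]
    dy = point[1] - observer_pos[1]
    cos_h, sin_h = math.cos(observer_heading), math.sin(observer_heading)
    # Project the displacement onto the observer's forward and left axes,
    # i.e. rotate it by -heading into the observer's coordinate frame.
    forward = cos_h * dx + sin_h * dy
    left = -sin_h * dx + cos_h * dy
    return (forward, left)

# A cup sits at world (2, 0); a human stands at the origin facing +y.
# From the human's perspective the cup is 2 units to their right
# (forward = 0, left = -2), even though in the robot's world map it
# simply lies on the +x axis.
cup_in_human_frame = to_other_frame((2, 0), (0, 0), math.pi / 2)
```

A robot that can perform this kind of re-mapping can ground phrases such as "the cup on your right" against its own world model, which is one concrete reason shared frames of reference matter for the collaborative scenarios the abstract describes.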