Abstract
Dexterous reaching, pointing, and grasping play a critical role in human interactions with tools and the environment, and they also allow individuals to interact effectively with one another in social settings. Developing robotic systems with mental simulation and imitation learning abilities for such tasks is a promising way to enhance robot performance and to enable interactions with humans in social contexts. Despite important advances in artificial intelligence and smart robotics, current robotic systems lack the flexibility and adaptability that humans so naturally exhibit. Here we present and study a neural architecture that captures some of the critical visuo-spatial transformations required for the cognitive processes of mental simulation and imitation. The results show that our neural model can perform accurate, flexible, and robust 3D unimanual and bimanual actual/imagined reaching movements while avoiding extreme joint positions and generating kinematics similar to those observed in humans. In addition, using visuo-spatial transformations, the neural model was able to observe/imitate bimanual arm reaching movements independently of the viewpoint, distance, and anthropometric differences between demonstrator and imitator. Our model is a first step toward a more advanced, neurally inspired hierarchical architecture that integrates mental simulation and sensorimotor processing as it learns to imitate dexterous bimanual arm movements.