Abstract

A prerequisite for achieving brain-like intelligence is the ability to rapidly learn new behaviors and actions. A fundamental mechanism for rapid learning in humans is imitation: children routinely learn new skills (e.g., opening a door or tying a shoelace) by imitating their parents; adults continue to learn by imitating skilled instructors (e.g., in tennis). In this chapter, we propose a probabilistic framework for imitation learning in robots that is inspired by how humans learn through imitation and exploration. Rather than relying on complex (and often brittle) physics-based models, the robot learns a dynamic Bayesian network that captures its dynamics directly in terms of sensor measurements and actions during an imitation-guided exploration phase. After learning, actions are selected based on probabilistic inference in the learned Bayesian network. We present results demonstrating that a 25-degree-of-freedom humanoid robot can learn dynamically stable, full-body imitative motions simply by observing a human demonstrator.
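
To make the learn-then-infer pipeline concrete, below is a minimal sketch, not the authors' implementation, of the idea the abstract describes: fit a one-step probabilistic dynamics model p(s_{t+1} | s_t, a_t) from imitation-guided exploration data, then select the action that maximizes the likelihood of the demonstrated next state under that model. The linear-Gaussian model, the state/action dimensions, the sampled candidate-action set, and names such as `select_action` are all illustrative assumptions, since the abstract does not specify them.

```python
# Sketch (assumptions labeled): learn a linear-Gaussian conditional
# p(s' | s, a) = N(W [s; a; 1], Sigma) from exploration data, then choose
# actions by probabilistic inference against a demonstrated target state.
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM = 4, 2  # assumed sizes, not from the paper


def true_dynamics(s, a):
    # Stand-in for the unknown robot dynamics being explored.
    return 0.9 * s + 0.5 * np.tile(a, 2) + 0.05 * rng.normal(size=STATE_DIM)


# --- Imitation-guided exploration phase: collect (s_t, a_t, s_{t+1}) ---
S, A, S_next = [], [], []
s = np.zeros(STATE_DIM)
for _ in range(500):
    a = rng.uniform(-1, 1, ACTION_DIM)  # exploratory action
    s_next = true_dynamics(s, a)
    S.append(s), A.append(a), S_next.append(s_next)
    s = s_next
S, A, S_next = map(np.array, (S, A, S_next))

# --- Learning phase: least-squares fit of the conditional's mean and noise ---
X = np.hstack([S, A, np.ones((len(S), 1))])
W, *_ = np.linalg.lstsq(X, S_next, rcond=None)
residuals = S_next - X @ W
Sigma_inv = np.linalg.inv(np.cov(residuals.T) + 1e-6 * np.eye(STATE_DIM))


def select_action(s, s_demo, n_candidates=256):
    """Pick the sampled action maximizing the log-likelihood of the
    demonstrated next state s_demo under the learned model."""
    candidates = rng.uniform(-1, 1, (n_candidates, ACTION_DIM))
    best_a, best_ll = None, -np.inf
    for a in candidates:
        mu = np.concatenate([s, a, [1.0]]) @ W  # predicted next state
        d = s_demo - mu
        ll = -0.5 * d @ Sigma_inv @ d  # log N(s_demo; mu, Sigma) + const
        if ll > best_ll:
            best_a, best_ll = a, ll
    return best_a


s_demo = np.ones(STATE_DIM)  # an illustrative demonstrated target state
print("chosen action:", select_action(np.zeros(STATE_DIM), s_demo))
```

The two-phase structure mirrors the abstract: exploration supplies the training triples, and action selection reduces to inference in the learned model rather than to a hand-built physics model.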
