Abstract

Manually programming robots is difficult, impeding more widespread use of robotic systems. In response, efforts are being made to develop robots that use imitation learning. With such systems, a robot learns by watching humans perform tasks. However, most imitation learning systems replicate a demonstrator’s actions rather than obtaining a deeper understanding of why those actions occurred. Here we introduce an imitation learning framework based on causal reasoning that infers a demonstrator’s intentions. As with imitation learning in people, our approach constructs an explanation for a demonstrator’s actions and generates a plan based on this explanation to carry out the same goals, rather than trying to faithfully reproduce the demonstrator’s precise motor actions. This enables generalization to new situations. We present novel causal inference algorithms for imitation learning and establish their soundness, completeness, and complexity characteristics. Our approach is validated using a physical robot, which successfully learns and generalizes skills involving bimanual manipulation. Human performance on similar skills is reported. Computer experiments using the Monroe Plan Corpus further validate our approach. These results suggest that causal reasoning is an effective unifying principle for imitation learning. Our system provides a platform for exploring neural implementations of this principle in future work.