Abstract

Imitation in artificial systems involves a number of important aspects, such as extracting the relevant features of the demonstrated behaviour, inverse mapping of observations, and executing motor commands. In this article we focus on how an artificial system can infer what the demonstrator intended to do. The model that we propose draws inspiration from developmental psychology and has three crucial features. The first is that the imitating agent needs repeated trials, thus stepping away from the one-shot learning-by-demonstration paradigm. The second is that the imitating agent needs a learning method in which it keeps track of intentions to reach goals. The third is that the model does not require an external measure of equivalence; instead, the demonstrator decides whether an attempted imitation was equivalent to the demonstration. We present a computational model and its simulation results, which underpin our theory of goal-directed imitation in an artificial system.
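
To make the three features concrete, here is a minimal sketch of such a trial-and-feedback loop. It is an illustration under our own assumptions, not the authors' implementation: every name in it (imitation_loop, execute, demonstrator_accepts, the confidence scores and update constants) is a hypothetical stand-in. The imitator repeatedly attempts imitations (feature one), maintains confidence scores over candidate goals as a simple proxy for tracking intentions (feature two), and relies solely on the demonstrator's accept/reject judgement rather than any external equivalence measure (feature three).

```python
import random

def imitation_loop(candidate_goals, execute, demonstrator_accepts,
                   max_trials=50, reward=0.1, penalty=0.05):
    """Repeatedly attempt imitations until the demonstrator accepts one.

    All parameter names and the confidence-update scheme are hypothetical
    stand-ins for the model described in the abstract.
    """
    # Start with uniform confidence over the candidate intentions.
    confidence = {goal: 1.0 / len(candidate_goals) for goal in candidate_goals}

    for trial in range(max_trials):
        # Prefer the goal currently believed most likely to be intended,
        # with a little exploration so no candidate is ruled out too early.
        if random.random() < 0.1:
            goal = random.choice(candidate_goals)
        else:
            goal = max(confidence, key=confidence.get)

        attempt = execute(goal)            # motor execution of the attempt

        if demonstrator_accepts(attempt):  # demonstrator judges equivalence
            confidence[goal] += reward
            return goal, trial + 1         # inferred intended goal
        confidence[goal] = max(0.0, confidence[goal] - penalty)

    return None, max_trials                # no accepted imitation found
```

Note the design choice this sketch highlights: the demonstrator's accept/reject response is the only oracle the loop consults, so no externally defined distance between demonstration and attempt is ever computed.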
