Abstract
Humanoid robots can learn motor skills through the programming-by-demonstration framework, which matches the kinematic movements of a robot with those of a human. Continuous goal-directed actions (CGDA) is a framework that complements the robot imitation paradigm: instead of kinematic parameters, its encoding is centered on the changes an action produces on object features. These features can be any measurable characteristic of the object, such as color or area. Executing actions encoded as CGDA allows a robot-configuration-independent achievement of tasks, avoiding the correspondence problem. By tracking object features during action execution, we create a trajectory in an n-dimensional feature space that represents the object's temporal states, allowing generalization, recognition, and execution of action effects on the environment. Experiments have been performed with a humanoid robot in a simulated environment. Evolutionary computation was used to calculate the joint parameters of the humanoid robot, the objective being to generate a motor trajectory that produces a feature trajectory matching the target one. In one experiment, the robot performs a spatial trajectory based on spatial object features; in a new experiment, the robot paints a wall by following a color feature encoding.
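The core loop described above (evolving joint parameters until the resulting feature trajectory matches a target one) can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: the toy `simulate_features` function, the simple (mu+lambda)-style evolution strategy, and all parameter names are assumptions standing in for the real robot simulator and evolutionary algorithm.

```python
import random

def simulate_features(joints, steps=5):
    """Toy stand-in for the robot simulator: map joint parameters to a
    feature trajectory (here, one scalar object feature per time step).
    The real system would execute the motion and track object features."""
    return [sum(j * (t + 1) for j in joints) / len(joints) for t in range(steps)]

def trajectory_error(traj, target):
    """Sum of squared distances between two feature trajectories."""
    return sum((a - b) ** 2 for a, b in zip(traj, target))

def evolve(target, n_joints=3, pop_size=20, generations=100, sigma=0.1, seed=0):
    """Evolve joint parameters whose simulated feature trajectory
    approaches the target feature trajectory (CGDA-style execution)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(n_joints)] for _ in range(pop_size)]
    for _ in range(generations):
        # Rank individuals by how closely their feature trajectory matches the target.
        pop.sort(key=lambda ind: trajectory_error(simulate_features(ind), target))
        parents = pop[: pop_size // 2]
        # Refill the population with Gaussian mutations of the best half.
        children = [[g + rng.gauss(0, sigma) for g in rng.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=lambda ind: trajectory_error(simulate_features(ind), target))

# Target feature trajectory produced by some demonstrated joint configuration.
target = simulate_features([0.5, -0.2, 0.8])
best = evolve(target)
print(trajectory_error(simulate_features(best), target))
```

Note the key CGDA property this sketch preserves: fitness is computed purely in feature space, so any joint configuration reproducing the same object effects is acceptable, regardless of the robot's kinematics.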