Abstract

In this paper, we propose an approach for learning task specifications automatically by observing human demonstrations. This allows a robot to combine representations of individual actions to achieve a high-level goal. We hypothesize that task specifications consist of variables whose pattern of change is invariant across demonstrations. We identify these specifications at different stages of task completion. Changes in task constraints allow us to identify transitions in the task description and to segment the task into subtasks. We extract the following task-space constraints: 1) the reference frame in which to express the task variables; 2) the variable of interest at each time step, either position or force at the end effector; and 3) a factor that modulates the contribution of force and position in a hybrid impedance controller. The approach was validated on a seven-degree-of-freedom KUKA arm performing two different tasks: grating vegetables and extracting a battery from a charging stand.
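The abstract does not give the controller's equations, but the third extracted constraint suggests a blending term that weights force tracking against position (impedance) tracking. A minimal sketch of that idea is shown below; the function name, gains, and blending rule are illustrative assumptions, not the paper's actual controller.

```python
import numpy as np

def hybrid_command(x, x_des, f, f_des, alpha, Kp, Kf):
    """Blend position and force errors into a single task-space command.

    alpha in [0, 1] plays the role of the modulation factor mentioned in
    the abstract: alpha = 1 yields pure force tracking, alpha = 0 pure
    position (impedance-like) tracking. Gains and structure are assumed.
    """
    pos_term = Kp @ (x_des - x)      # position correction term
    force_term = Kf @ (f_des - f)    # force tracking term
    return alpha * force_term + (1.0 - alpha) * pos_term

# Example: 3-D task space, commanding contact force along z while holding x, y
x, x_des = np.zeros(3), np.array([0.1, 0.0, 0.0])
f, f_des = np.zeros(3), np.array([0.0, 0.0, 5.0])
u = hybrid_command(x, x_des, f, f_des, alpha=0.7,
                   Kp=np.diag([200.0, 200.0, 200.0]),
                   Kf=np.diag([0.5, 0.5, 0.5]))
print(u)
```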
