Abstract

Programming-by-Demonstration (PbD) is a central research topic in robotics, since it is an important part of human-robot interaction. A key scientific challenge in PbD is to make robots capable of imitating a human: a robot is instructed how to perform a novel task by observing a human demonstrator performing it. Current research has shown that PbD is a promising approach to effective task learning that greatly simplifies the programming process (Calinon et al., 2007; Pardowitz et al., 2007; Skoglund et al., 2007; Takamatsu et al., 2007). In this chapter a method for imitation learning is presented, based on fuzzy modeling and a next-state planner in a PbD framework. For recent and comprehensive overviews of PbD (also called “Imitation Learning” or “Learning from Demonstration”), see Argall et al. (2009), Billard et al. (2008) or Bandera et al. (2007). What might appear to be a straightforward idea, copying human motion trajectories with a simple teaching-playback method, turns out to be unrealistic for several reasons. As pointed out by Nehaniv & Dautenhahn (2002), there is a significant difference in morphology between the body of the human and that of the robot, known in imitation learning as the correspondence problem. Further complicating the picture, the initial locations of the human demonstrator and the robot relative to the task (i.e., the object) might force the robot into unreachable sections of the workspace or into singular arm configurations. Moreover, in a grasping scenario it is not possible to reproduce the motions of the human hand, since no robotic hand yet exists that matches the human hand in terms of functionality and sensing. In this chapter we show that the robot can generate an appropriate reaching motion towards the target object, provided that a robotic hand with autonomous grasping capabilities is used to execute the grasp. In the approach presented here, the robot first observes a human demonstrating the environment of the task (i.e., the objects of interest) and then the actual task. This knowledge, i.e., grasp-related object properties, hand-object relational trajectories, and the coordination of reach-and-grasp motions, is encoded and generalized in terms of hand-state space trajectories. The hand-state components are defined such that they are invariant with respect to perception, and they include the mapping between the human and the robot hand, i.e., the correspondence.
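
To make the hand-state idea concrete, the following minimal sketch illustrates one possible hand-object relational encoding and a simple next-state planning step. The particular state components (hand-object distance, approach direction, hand aperture), the function names, and the proportional update rule are illustrative assumptions only; they are not the chapter's actual formulation.

import numpy as np

# Illustrative assumption: a hand-state vector built from hand-object
# relations (distance, approach direction, hand aperture), so the encoding
# depends only on the hand relative to the object, not on absolute poses.
def hand_state(hand_pos, hand_aperture, obj_pos):
    offset = obj_pos - hand_pos
    dist = np.linalg.norm(offset)
    direction = offset / dist if dist > 1e-9 else np.zeros(3)
    return np.concatenate(([dist], direction, [hand_aperture]))

# Illustrative next-state planner: step the current hand-state a fraction
# of the way toward a target hand-state taken from the demonstration.
def next_state(current, target, gain=0.2):
    return current + gain * (target - current)

# Example: drive an initial hand-state toward the demonstrated grasp state.
obj = np.array([0.5, 0.1, 0.0])
current = hand_state(np.array([0.3, 0.0, 0.2]), 0.08, obj)
goal = hand_state(obj, 0.03, obj)
for _ in range(20):
    current = next_state(current, goal)

In practice the target at each planning step would come from the generalized (fuzzy-modeled) demonstration trajectory rather than from a single fixed goal state.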
