Abstract

This paper describes research on a robot learning task-level representations through observation of human demonstrations. We focus on human hand actions and develop a method for constructing a human task model that integrates multiple observations to resolve ambiguity based on attention points (APs). This analysis constructs a symbolic task model efficiently in a coarse-to-fine manner through two steps. However, to represent the delicate motions that appear in a task, the system must incorporate information about the precise motion of the manipulated objects into the abstract task model. We propose a method for identifying the manipulated object through repeated observations of both human and robot behavior. To this end, we present a method that combines 2D and 3D template matching techniques to localize an object in the 3D space reconstructed from a depth image and an intensity image. We apply this technique to the recognition of human and robot behavior by obtaining the precise trajectories of the manipulated objects. We also present experimental results achieved with a human-form robot equipped with a 9-eye stereo vision system.
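The abstract does not give implementation details, but the core idea of combining 2D matching on the intensity image with depth information can be illustrated as follows. This is a minimal sketch, not the authors' implementation: it assumes OpenCV and NumPy, a depth image aligned with the intensity image, and hypothetical pinhole camera intrinsics (fx, fy, cx, cy).

```python
# Sketch: localize an object in 3D by matching a 2D template on the
# intensity image, then back-projecting the matched region using depth.
# All parameter names and the pinhole model are illustrative assumptions.
import cv2
import numpy as np

def localize_object_3d(intensity, depth, template, fx, fy, cx, cy):
    """Return the (x, y, z) camera-frame position of the best template match.

    intensity : 2D uint8 image
    depth     : 2D float32 image (metres), pixel-aligned with `intensity`
    template  : 2D uint8 patch of the target object
    """
    # Step 1: 2D template matching on the intensity image gives a coarse
    # image-plane location for the object.
    scores = cv2.matchTemplate(intensity, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (u0, v0) = cv2.minMaxLoc(scores)   # top-left corner of best match
    th, tw = template.shape[:2]
    u, v = u0 + tw // 2, v0 + th // 2           # centre pixel of the match

    # Step 2: the depth image lifts that pixel into 3D. A median over the
    # matched window is more robust than a single depth sample.
    window = depth[v0:v0 + th, u0:u0 + tw]
    z = float(np.median(window[window > 0]))    # ignore invalid (zero) depth

    # Back-project through a pinhole model (intrinsics are assumptions here).
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])
```

Running such a localizer on every frame of a demonstration would yield the kind of 3D object trajectory the abstract refers to; the paper's actual method additionally uses 3D template matching to refine the estimate.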
