Abstract

In this paper, a novel framework that enables humanoid robots to learn new skills from demonstration is proposed. The proposed framework uses a real-time human motion imitation module as a demonstration interface, providing the desired motion to the learning module in an efficient and user-friendly way. This interface overcomes many problems of currently used interfaces such as direct motion recording, kinesthetic teaching, and immersive teleoperation. It gives the human demonstrator the ability to control almost all body parts of the humanoid robot in real time, including hand shape and orientation, which are essential for object grasping. The humanoid robot is controlled remotely without any sophisticated haptic devices, relying only on an inexpensive Kinect sensor and two additional force sensors. To the best of our knowledge, this is the first time a Kinect sensor has been used to estimate hand shape and orientation for object grasping in the context of real-time human motion imitation. The observed motions are then projected onto a latent space using the Gaussian process latent variable model (GPLVM) to extract the relevant features. These features are used to train regression models through the variational heteroscedastic Gaussian process regression (VHGPR) algorithm, which has been shown to be both highly accurate and fast. The proposed framework is validated on activities involving both the upper and lower human body parts, as well as object grasping.
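
As an illustrative sketch of the learning pipeline summarized above (not the authors' implementation), the following Python snippet uses the GPy library to project recorded joint trajectories onto a low-dimensional latent space with a GPLVM and then fits a regression model on the latent features. GPy does not provide the variational heteroscedastic variant, so a standard GP regression stands in for VHGPR; the data, dimensions, and phase variable are hypothetical placeholders.

```python
import numpy as np
import GPy

# Hypothetical demonstration data: 200 frames of 30 joint angles
# captured by the imitation interface (random placeholder values).
Y = np.random.randn(200, 30)

# Project the observed motions onto a 3-D latent space with a GPLVM
# to extract the relevant low-dimensional features.
gplvm = GPy.models.GPLVM(Y, input_dim=3)
gplvm.optimize(messages=False, max_iters=200)
X_latent = np.asarray(gplvm.X)            # latent features, shape (200, 3)

# Train a regression model from a motion phase variable to the latent
# trajectory. Note: the paper uses VHGPR; GPy has no VHGPR model, so
# standard GP regression is used here purely for illustration.
t = np.linspace(0.0, 1.0, 200)[:, None]   # hypothetical phase variable
kernel = GPy.kern.RBF(input_dim=1)
gpr = GPy.models.GPRegression(t, X_latent, kernel)
gpr.optimize(messages=False)

# Reproduce the skill: predict the latent trajectory at new phases.
t_new = np.linspace(0.0, 1.0, 50)[:, None]
mean, var = gpr.predict(t_new)            # mean has shape (50, 3)
```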
