Abstract

Postural synergies allow a rich set of hand configurations to be represented in a lower-dimensional space than the original joint space. In our previous work, we showed that this idea extends to trajectories through multivariate functional principal component analysis, yielding a set of basis functions able to represent grasping movements learned from human demonstration. In this article, we introduce a human cognition-inspired approach for generalizing and improving robot grasping skills in this motion-synergies subspace. A reinforcement learning (RL) algorithm allows the robot to explore the surrounding space and improve its capability to reach and grasp objects. The learning method is policy improvement with path integrals (PI²), running in the policy-parameter space. The policy is bootstrapped with synergy coefficients obtained from neural networks, and its reward is based on a force-closure grasp quality index computed at the end of the task, measuring how firm the grip is. We finally show that combining neural networks and RL gives the robot manipulator a good initial estimate of the grasping configuration and faster convergence to an optimal grasp compared with a database approach, the latter being a less general solution in the presence of new objects.
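To make the learning loop concrete, the following is a minimal sketch of an episodic PI²-style update over synergy coefficients, under assumptions not spelled out in the abstract: `evaluate_grasp` is a hypothetical stand-in for executing the grasp and returning the end-of-task force-closure quality index (higher meaning a firmer grip), and `theta` plays the role of the synergy coefficients bootstrapped from a network prediction. It is an illustration of the general technique, not the authors' implementation.

```python
import numpy as np

def pi2_episode(theta, evaluate_grasp, n_rollouts=10, sigma=0.05, lam=0.1):
    """One episodic PI2-style update: perturb the synergy coefficients,
    score each rollout with the end-of-task grasp quality, and average
    the exploration noise with path-integral (softmax) weights."""
    # Exploration noise in the synergy-coefficient space.
    eps = sigma * np.random.randn(n_rollouts, theta.size)
    # Cost is the negative grasp quality (evaluate_grasp is hypothetical).
    costs = np.array([-evaluate_grasp(theta + e) for e in eps])
    # Normalize costs to [0, 1] for a well-scaled exponentiation.
    s = (costs - costs.min()) / (costs.max() - costs.min() + 1e-12)
    # Path-integral weights: low-cost rollouts dominate the update.
    w = np.exp(-s / lam)
    w /= w.sum()
    # Parameter update as the probability-weighted average of the noise.
    return theta + w @ eps

# Usage sketch: bootstrap from a network estimate, then iterate.
# theta = grasp_net.predict(object_features)   # hypothetical bootstrap
# for _ in range(50):
#     theta = pi2_episode(theta, evaluate_grasp)
```

Because the reward arrives only at the end of the task, the per-time-step path-integral weighting collapses to this episodic reward-weighted averaging form, which is why the update needs no value function or gradient of the grasp quality index.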
