Abstract

Predicting hand and finger posture during grasping tasks is an important issue in biomechanics. In this paper, a neural-network-based technique is proposed to learn the inverse kinematics mapping between the 3D fingertip position and the corresponding joint angles. Finger movements are recorded with an instrumented glove and mapped to a multi-chain model of the hand. From a desired fingertip position, the neural networks predict the corresponding finger joint angles while preserving the subject-specific coordination patterns. Two sets of movements are considered in this study. The first, the training set, consists of free finger movements and is used to construct the mapping between fingertip position and joint angles. The second, constructed for testing purposes, is composed of a sequence of grasping tasks involving everyday objects. The maximal mean error between the measured fingertip position and the fingertip position obtained by applying forward kinematics to the simulated joint angles is 0.99±0.76 mm for the training set and 1.49±1.62 mm for the test set. The maximal RMS error of the joint-angle prediction is 2.85° and 5.10° for the training and test sets, respectively, while the maximal mean joint-angle prediction error is −0.11±4.34° and −2.52±6.71° for the training and test sets, respectively. Results on the learning and generalization capabilities of this architecture are also presented and discussed.
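The evaluation described above can be illustrated with a minimal sketch. The code below is hypothetical and not the paper's model: it trains a small one-hidden-layer network to learn the inverse kinematics of a planar two-joint "finger" (segment lengths, joint ranges, and network size are illustrative assumptions), then scores it the same way the abstract does, by pushing the predicted joint angles through forward kinematics and measuring the fingertip position error.

```python
import numpy as np

# Hypothetical sketch: learn an inverse kinematics mapping (fingertip
# position -> joint angles) for a planar two-joint finger with a tiny MLP.
rng = np.random.default_rng(0)
L1, L2 = 40.0, 25.0  # illustrative segment lengths in mm

def forward_kinematics(theta):
    """Fingertip position from joint angles, shape (N, 2) -> (N, 2)."""
    t1, t2 = theta[:, 0], theta[:, 1]
    x = L1 * np.cos(t1) + L2 * np.cos(t1 + t2)
    y = L1 * np.sin(t1) + L2 * np.sin(t1 + t2)
    return np.stack([x, y], axis=1)

# "Free movement" training data: random angles within a flexion range where
# the inverse mapping is unique (both joints in [0, pi/2]).
theta_train = rng.uniform([0.0, 0.0], [np.pi / 2, np.pi / 2], size=(2000, 2))
pos_train = forward_kinematics(theta_train)

# Normalise inputs to keep the gradients well-scaled.
mu, sd = pos_train.mean(0), pos_train.std(0)
X = (pos_train - mu) / sd

# One-hidden-layer MLP trained by full-batch gradient descent on MSE.
n_hidden = 32
W1 = rng.normal(0, 0.1, (2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, 2)); b2 = np.zeros(2)
lr = 0.1
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)           # hidden activations
    pred = H @ W2 + b2                 # predicted joint angles
    err = pred - theta_train
    dW2 = H.T @ err / len(X); db2 = err.mean(0)       # backprop
    dH = (err @ W2.T) * (1 - H ** 2)
    dW1 = X.T @ dH / len(X); db1 = dH.mean(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# Evaluate as in the abstract: run the predicted angles through forward
# kinematics and compare the reconstructed fingertip with the target.
theta_pred = np.tanh(X @ W1 + b1) @ W2 + b2
tip_err = np.linalg.norm(forward_kinematics(theta_pred) - pos_train, axis=1)
print(f"mean fingertip reconstruction error: {tip_err.mean():.2f} mm")
```

The fingertip-space error metric is the useful part of this sketch: because several joint configurations can map to nearby fingertip positions, comparing reconstructed fingertip positions (rather than raw angles) measures exactly what matters for grasp simulation, which is why the paper reports both angle errors and position errors.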
